Prosecution Insights
Last updated: April 19, 2026
Application No. 17/726,887

PRE-DEPLOYMENT VALIDATION OF INFRASTRUCTURE TOPOLOGY

Final Rejection (§101, §103)
Filed
Apr 22, 2022
Examiner
HAN, BYUNGKWON
Art Unit
2121
Tech Center
2100 — Computer Architecture & Software
Assignee
Kyndryl Inc.
OA Round
2 (Final)
Grant Probability: 0% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution
Career History: 29 total applications across all art units, 28 currently pending

Statute-Specific Performance

§101: 34.7% (-5.3% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 19.3% (-20.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 1 resolved case

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1, 2, 6, 10, 14, 17, 18, and 20 were amended. Claims 1-20 are pending and examined herein. Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1-20 are rejected under 35 U.S.C. 103.

Response to Amendment

The amendment filed November 3rd, 2025 has been entered. Claims 1, 2, 6, 10, 14, 17, 18, and 20 were amended. Claims 1-20 are pending and are examined herein. Applicant's amendments to the claims have overcome each and every objection previously set forth in the Non-Final Rejection Office Action mailed August 6th, 2025.

Response to Arguments

Applicant's arguments filed November 3rd, 2025 regarding the 35 U.S.C. 101 rejection of claims 1-15 have been fully considered, but they are not persuasive. The claims are focused on analyzing historic deployment topology using a machine learning model to generate a confidence score and using that score to decide whether to initiate deployment. Under USPTO subject matter eligibility guidance, claim limitations directed to scoring/prediction and decisioning based on processed information fall within the abstract idea. Applicant has not shown that the claims integrate the above into a practical application. The remaining limitations (e.g., deployment topology context, monitoring provider changes, updating availability state, notifications, etc.) are recited at a high functional level. They amount to applying the result in a particular technological environment rather than reciting a specific technical mechanism that improves computer/network operation. Therefore, the claims do not integrate the abstract idea into a practical application and do not add significantly more.
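For illustration only, the score-and-threshold decisioning that the rejection characterizes as abstract can be sketched as follows. Every name, the weighting scheme, and the 0.75 threshold are hypothetical stand-ins, not the claimed implementation:

```python
# Hypothetical sketch: a trained model scores a requested deployment
# topology, and deployment is initiated only when the confidence score
# meets or exceeds a predetermined threshold. All names are illustrative.

THRESHOLD = 0.75  # stand-in for the claimed predetermined threshold value

def score_topology(features):
    """Stand-in for the trained ML predictive model's inference call."""
    # A real model would map topology features to a probability in [0, 1];
    # here a toy weighted sum over boolean features plays that role.
    weights = {"deps_resolved": 0.6, "resources_available": 0.4}
    return sum(weights[k] for k, v in features.items() if v)

def decide_deployment(features):
    """Deploy automatically when the score clears the threshold."""
    score = score_topology(features)
    if score >= THRESHOLD:
        return ("deploy", score)   # automatically initiate implementation
    return ("hold", score)         # withhold deployment
```

The point of contention in the rejection is that the scoring and thresholding above are generic decisioning, while only the surrounding deployment environment is technological.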
Applicant's reliance on Enfish is not persuasive because Enfish involved claim language directed to a specific improvement to how computers operate, whereas the present claims use a generic ML-based score to make a deployment decision. Applicant's reliance on Ex parte Desjardins is also unpersuasive because that decision turned on claim language reflecting an improvement in the training/operation of the ML model itself, which is not recited here. Consistent with recent guidance from the U.S. Court of Appeals for the Federal Circuit, applying generic machine learning in a particular environment, without claiming a specific technological improvement, remains ineligible. Thus, the remarks in response to the 35 U.S.C. 101 rejection are not persuasive.

Applicant's arguments filed November 3rd, 2025 regarding the rejections under 35 U.S.C. 103 have been fully considered and are persuasive. The cited references do not fairly teach or suggest the claims as amended. However, a new reference, Greifeneder et al. (US 11659020 B2, 2021), is introduced in the 35 U.S.C. 103 rejection below to teach the amended features in combination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1-20, in accordance with these steps, follows.

Step 1 Analysis: Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter).
Claims 1-9 are directed to a method, so they fall within the statutory category of process. Claims 10-16 are directed to a computer program product comprising one or more computer-readable storage media, which is the statutory category of manufacture. Claims 17-20 are directed to a system, which is the statutory category of machine.

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101.

Regarding claim 1, the following claim elements are abstract ideas:

generating, by the computing device, a deployment topology for requested resources of an information technology (IT) deployment request of a user, which includes the resource dependencies; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

validating,…, the IT deployment request by determining dynamic dependencies across the deployment topology, verifying dependent resources and an available state of the dependent resources, and (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
generating, by the computing device using the trained ML predictive model, a confidence score regarding a likelihood of successful implementation of the IT deployment request based on the determined dynamic dependencies of the deployment topology and the trained ML predictive model; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

training, by a computing device, a machine learning (ML) predictive model with historic infrastructure deployment data comprising time series data for a duration, data corresponding to successful deployment topologies, and data corresponding to failed deployment topologies of a plurality of resource providers in a network environment, including resource dependencies, wherein the ML predictive model comprises a neural network model; (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

, by the computing device, (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

and dynamically implementing, by the computing device, deployment of the IT deployment request to provision the requested resources from multiple providers in the network environment based on the confidence score by automatically initiating implementation of the IT deployment request in response to the confidence score meeting or exceeding a predetermined threshold value. (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, claim 2 recites the following abstract idea:

the deployment topology indicates how constituent parts of the requested resources and other resources interacting with the requested resources are interrelated and arranged in the network environment. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

Claim 2 further recites the following additional element:

the neural network model comprises memory elements and is a feed-forward type model with multiple middle neural layers to enable learning complex situations with the historic infrastructure deployment data. (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

Regarding claim 3, the rejection of claim 2 is incorporated herein. Further, claim 3 recites the following abstract ideas:

determining, by the computing device, an availability state of the requested resources and the other resources interacting with the requested resources, (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

wherein the generating the confidence score is further based on the state of the requested resources and the other resources interacting with the requested resources. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

Claim 3 does not recite additional elements.

Regarding claim 4, the rejection of claim 1 is incorporated herein.
Further, claim 4 recites the following abstract idea:

and determining, by the computing device, that the deployment topology is enabled by the master topology. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

Claim 4 further recites the following additional element:

accessing, by the computing device, a master topology indicating how resources of the plurality of resource providers are interrelated and arranged in the network environment; (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

Regarding claim 5, the rejection of claim 4 is incorporated herein. Further, claim 5 recites the following abstract idea:

wherein the determining that the deployment topology is enabled by the master topology is based on the one or more stored availability states. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

Claim 5 further recites the following additional elements:

continuously monitoring, by the computing device, change event data from one or more of the plurality of resource providers in real-time; (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

and updating, by the computing device in real-time, one or more stored availability states of the resources of the plurality of resource providers in the master topology based on the change event data, (This is mere data gathering, which is a well-understood, routine, conventional activity. It does not integrate the judicial exception into a practical application. See MPEP § 2106.05(d). Therefore, this does not amount to significantly more than the judicial exception.)
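For illustration only, the monitoring-and-updating limitations of claim 5 can be sketched as follows. The resource names, event format, and master-topology structure are all hypothetical, not taken from the application:

```python
# Hypothetical sketch of claim 5: change events from resource providers
# update stored availability states of resources in a master topology
# in real time. All names and structures are illustrative.

master_topology = {
    "db-primary": {"provider": "provider-a", "available": True},
    "cache-1":    {"provider": "provider-b", "available": True},
}

def apply_change_event(event):
    """Update the stored availability state for one resource."""
    resource = master_topology.get(event["resource"])
    if resource is not None:
        resource["available"] = event["available"]

# A provider reports that cache-1 has gone offline; the stored
# availability state in the master topology is updated accordingly.
apply_change_event({"resource": "cache-1", "available": False})
```

Under the rejection's characterization, gathering such change-event data is routine data gathering; the applicant's position is presumably that the real-time master-topology update is part of an integrated technical mechanism.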
Regarding claim 6, the rejection of claim 1 is incorporated herein. Further, claim 6 recites the following abstract idea:

determining, by the computing device, one or more of the requested resources or their dependencies can be dependency-locked; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

Claim 6 further recites the following additional elements:

dependency-locking the one or more of the requested resources for a time period persisting until a completion of the deployment of the IT deployment request. (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

and updating, by the computing device, the trained ML predictive model based on a failure or a success of the automatically initiated implementation of the IT deployment request to improve an accuracy of predictions over time within the trained ML predictive model. (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following additional element:

updating, by the computing device, the ML model based on data regarding the deployment of the IT deployment. (This is mere data gathering, which is a well-understood, routine, conventional activity. It does not integrate the judicial exception into a practical application. See MPEP § 2106.05(d). Therefore, this does not amount to significantly more than the judicial exception.)

Regarding claim 8, the rejection of claim 1 is incorporated herein.
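For illustration only, the dependency-locking limitation of claim 6 can be sketched as below. The class, method names, and resource identifiers are hypothetical stand-ins for whatever locking mechanism the application actually describes:

```python
# Hypothetical sketch of claim 6: requested resources and their
# dependencies are locked for a time period persisting until the
# deployment completes, so concurrent requests cannot change them.
# All names are illustrative.

class DependencyLocker:
    def __init__(self):
        self._locked = set()

    def try_lock(self, resources):
        """Lock all resources atomically; fail if any is already locked."""
        resources = set(resources)
        if self._locked & resources:
            return False
        self._locked |= resources
        return True

    def release(self, resources):
        """Release the lock on completion of the deployment."""
        self._locked -= set(resources)

locker = DependencyLocker()
first = locker.try_lock({"vm-1", "subnet-a"})    # lock for the deployment
second = locker.try_lock({"subnet-a", "db-1"})   # fails: dependency held
locker.release({"vm-1", "subnet-a"})             # released on completion
```

The rejection treats this as applying the abstract idea on a generic computer; whether locking of dependent resources is instead a concrete technical mechanism is the kind of dispute the arguments section addresses.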
Further, claim 8 recites the following additional element:

generating and sending, by the computing device, a notification including the confidence score to an end user device in the network environment. (This is mere data gathering, an insignificant extra-solution activity, which is a well-understood, routine, conventional activity. It does not integrate the judicial exception into a practical application. See MPEP § 2106.05(d). Therefore, this does not amount to significantly more than the judicial exception.)

Regarding claim 9, the rejection of claim 1 is incorporated herein. Further, claim 9 recites the following additional element:

the computing device includes software provided as a service in a cloud environment. (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

Claim 10 recites the following abstract idea:

receive an information technology (IT) deployment request for the deployment of at least one resource in the network environment; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)

The rest of claim 10 recites substantially similar subject matter to the combination of claims 1 and 8 and is rejected with the same rationale, mutatis mutandis. Claims 11-14 recite substantially similar subject matter to claims 2-5, respectively, and are rejected with the same rationale, mutatis mutandis.

Claim 15 recites the following additional element:

the at least one resource and the resource dependencies or a subset of the at least one resource and the resource dependencies (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)
The rest of claim 15 recites substantially similar subject matter to claim 6 and is rejected with the same rationale, mutatis mutandis.

Claim 16 recites the following additional element:

initiate deployment of the at least one resource; (This is mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)

The rest of claim 16 recites substantially similar subject matter to claim 7 and is rejected with the same rationale, mutatis mutandis. Claim 17 recites substantially similar subject matter to the combination of claims 10 and 11 and is rejected with the same rationale, mutatis mutandis. Claim 18 recites substantially similar subject matter to claim 13 and is rejected with the same rationale, mutatis mutandis. Claim 19 recites substantially similar subject matter to the combination of claims 12 and 14 and is rejected with the same rationale, mutatis mutandis. Claim 20 recites substantially similar subject matter to claim 15 and is rejected with the same rationale, mutatis mutandis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5 and 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Jeuk et al. (U.S. Pub. 2021/0392049 A1) in view of Kanso et al. (U.S. Pat. 12,124,924 B2), further in view of Greifeneder et al. (U.S. Pat. 11,659,020 B2).
Regarding Claim 1, Jeuk teaches:

A method, comprising: training, by a computing device, a machine learning (ML) predictive model with historic infrastructure deployment data comprising time series data for a duration, … wherein the ML predictive model comprises a neural network model; ([0043] of Jeuk states "Additionally or alternatively, operational data of the topology may be collected indirectly and transmitted by monitoring devices or systems of the resource domains 120 (e.g., network monitors, performance monitors, administrative systems, etc.). The operational data may be transmitted through the gateways and/or other edge computing devices of the resource domains 120, to the machine-learning engine 112, either on a periodic basis (e.g., every second, every 10 seconds, every minute, every hour, etc.) or when triggered by specific events (e.g., performance thresholds, software or system errors, support ticket creation, etc.), and may be different for different sources of the operational data." [0046] of Jeuk states "the desired state (and/or intent for use) of the network topology may be used in conjunction with network topology operational data as training data for the machine-learning model(s)." [0047] of Jeuk states "As another example, a tenant-defined desired state may indicate more complex details for the deployment, such as bandwidth requirements for specific links between micro-services in the service mesh environment (e.g., a micro-service on a particular cloud may require 10 Mbps to communicate with a different micro-service that runs in a different resource domain)." [0056] of Jeuk states "Constraints 208 (also referred to as deployment constraints) may be predefined deployment conditions, specifications, or limitations that are provided by an administrator user and/or may be based on predefined policies of the network topology. Constraints may be associated with any node or groups of nodes within the deployed topology, or with the topology as a whole." [0076] of Jeuk states "At 404, the machine-learning engine 112 may select, generate, and/or customize the software code to implement one or more machine-learning algorithms to be used to train the machine-learning model. The machine-learning algorithms selected at 404 may include, for example, one or more regression algorithms, instance-based algorithms, Bayesian algorithms, decision tree algorithms, clustering algorithms artificial neural network algorithms, and/or deep learning algorithms.")

generating, by the computing device, a deployment topology for requested resources of an information technology (IT) deployment request of a user, … ([0046] of Jeuk states "In addition to receiving operational data from the deployed topology, in some examples the machine-learning engine 112 also may receive as input data a desired state of the network topology. In such examples, the topology development system 110 may determine a desired state of the network topology using various techniques, including using service level agreements (SLAs) associated with the topology, templates, and/or tenant/application specific metadata. The topology development system 110 use the various techniques to determine desired state(s) of the topology and then transmitted to the machine-learning engine 112 for analysis. In some examples, the desired state of the network topology may correspond to the intent of the network topology". [0047] of Jeuk states "In various examples, the desired state network topology may be defined in various different ways. In some instances, a tenant may define the desired state for the topology by defining it through API calls or a graphical user interface in which the tenants specifies what is expected from the environment.")

validating, by the computing device, the IT deployment request ([0018] of Jeuk states "Initially, logical deployment models may be generated, validated, optimized for deployment within the hybrid (e.g., multiple resource domain) environment, followed by the physical deployment of the models using internetworking devices and virtual processing resources. Logical (or intent-based) topology models may be created and validated, and then used for physical realization of hybrid network topologies that are functionally equivalent to the logical models. Formal methods may be used to validate the correctness of the logical model without the need for test-based verification, and deployment generation instructions may be used automatically generate a physical network topology for the deployment of the hybrid network topology, including a set of deployment instructions that may be transmitted between gateways in different resource domains." [0060] of Jeuk states "After generating one or more prospective modified network topologies, based on the output from the machine-learning engine 112, as well as based on the logical model input 206, constraints 208, and resource inventories 210, the model generation engine 114 may validate and/or optimize the model using the model verification component 214, which also may verify the functional equivalency of the modified network topologies to the logical model input 206. The validated modified network topology then may be provided to the deployment generation engine 116, which may use an optimization system 216 and deployment generation system 218 to modify the physical network topology for deployed network topology, by transmitting sets of deployment instructions from to gateways within the different resource domains 120 to implement the determined updating of the network topology.")

verifying dependent resources and an available state of the dependent resources, and ([0057] of Jeuk states "Resource inventories 210 may define the available resources within each of the resource domains 120, thus determining the limits for prospective deployments across the resource domains 120 (e.g., multi-cloud deployments, hybrid network deployments, etc.). The particular data within resource inventories 210, and which components collect the resource inventories and provide the data to the topology deployment system 110 may vary in different embodiments. In some examples, a resource inventory may include data specifying which resource domains 120 are available, the specific resources that are available within each resource domain 120, such as the types of network devices and capabilities, the types of storage devices, the processing units, software services, and/or the types of virtual machine images that are available." Dependent resources and the availability of these resources are verified.)

and the trained ML predictive model ([0014] of Jeuk states "The techniques described herein may further include providing the operational data received from the network topology to a trained machine-learning model, and receiving output from the model which may be used, along with the resource inventories from the workload resource domains and/or constraints associated with the network topology, to determine an updated topology model which may be used to modify the existing network topology deployed across the workload resource domains.")

and dynamically implementing, by the computing device, deployment of the IT deployment request to provision the requested resources from multiple providers in the network environment ([0026] of Jeuk states "As noted above, hybrid network topologies may refer to large-scale computing systems and/or solutions that are implemented by several different nodes of various different types (e.g., network nodes, storage nodes, compute nodes, software application or service nodes, virtual machines or instances, etc.), which are deployed across multiple different resource domains, and which interact and communicate to provide the functionality of the overall topology." [0044] of Jeuk states "may take into account dynamic updates occurring at the nodes or elsewhere within the resource domains 120." Jeuk dynamically provisions resources across multiple resource domains (resources from multiple providers), with dynamic updates.)

Jeuk does not explicitly teach: data corresponding to successful deployment topologies, and data corresponding to failed deployment topologies of a plurality of resource providers in a network environment, including resource dependencies, …, which includes the resource dependencies; … based on the determined dynamic dependencies of the deployment topology… by determining dynamic dependencies across the deployment topology; generating, by the computing device using the trained ML predictive model, a confidence score regarding
a likelihood of successful implementation of the IT deployment request based on the confidence score by automatically initiating implementation of the IT deployment request in response to the confidence score meeting or exceeding a predetermined threshold value. However, Kanso explicitly teaches that data corresponding to successful deployment topologies, and data corresponding to failed deployment topologies (Column 10 lines 25 – 35 of Kanso states “At 210, given the input, the OSPS runs the machine learning model and predicts probability of success of the deployment. The machine learning model can output for example a probability rate between 0 and 1 regarding potential success rate of operator deployment in the new environment. At 212, risk levels are categorized as high/medium/low/none based on a probability determined by the OSPS. If policy per risk level is low or none then the OSPS deploys the operator and monitors its operations at 218 in a Kubernetes (K8s) cluster 220.” Column 12 lines 27 – 34 of Kanso states “In active learning, a large amount of data can be randomly sampled from an underlying distribution and this large dataset can be used to train the model 108 to perform a prediction, e.g., success of operator deployment in a PaaS environment. 
A query can be made to unlabeled sets 404 which then the model 108 utilizes to predict success or failure at 406 and can use feedback from a user to validate the prediction.”) generating, by the computing device using the trained ML predictive model, a confidence score regarding a likelihood of successful implementation of the IT deployment request (Column 8 lines 14 – 19 of Kanso states “Benefits can be achieved including, but not limited to, receiving a new operator and a namespace to be deployed as an input, and using the trained machine-learning model 108 to predict probability of success of deployment of the operator in the new environment.” Column 9 lines 64 – Column 10 lines 7 of Kanso states “In continuous operation, a new operator and a namespace are received to be deployed as input and using the trained machine-learning model 108, probability of success of operator deployment can be predicted. This efficient methodology facilitates mitigating risk of disrupting business operations and cost, reduces time of changes to deploy operators by analyzing success beforehand, and efficiently scale operator deployment to other environments through increased confidence that operator deployment in a particular environment has probability (e.g., within acceptable range) of success or not.”) based on the confidence score by automatically initiating implementation of the IT deployment request in response to the confidence score meeting or exceeding a predetermined threshold value. (Column 10 Lines 61 – 68 of Kanso states “As described in the high-level overview architecture 300, an operator is deployed in namespace 302. OSPS 304 is a success prediction service that is used to determine whether the deployment can be successful or not and provide a success rate, e.g., between 0 and 1. In this example, probability of success is 0.8. 
There might be some policies 306 to govern the process wherein if success criteria of probability are greater than X (wherein X is a pre-determined threshold), then deploy the operator, and if below threshold C do not deploy the operator. In this case, if 0.8 is categorized as high then the operator is deployed and if it is failed then it can be reported. “ Column 12 lines 27 – 40, 60 – 68 of Kanso states “In active learning, a large amount of data can be randomly sampled from an underlying distribution and this large dataset can be used to train the model 108 to perform a prediction, e.g., success of operator deployment in a PaaS environment. A query can be made to unlabeled sets 404 which then the model 108 utilizes to predict success or failure at 406 and can use feedback from a user to validate the prediction. Similarly, labeled set operators 402 can be in namespaces that are trained based on training data. At 406, the model 108 predicts based on the trained data success or failure and can obtain feedback from the user to validate the prediction. The training data can be updated with namespaces and labels as desired… A system can allow predefined actions per risk prediction. In some cases when risk is low (threshold), an automation can deploy operators with higher confidence. On the other hand, when risk is high (threshold), the system can notify Cl/CD experts to investigate and resolve potential risks or confirm that identified issues may not fail operators. 
These actions can be learned from data or active learning or adjusted as more data is collected over time.”) Greifeneder teaches the limitations reciting a plurality of resource providers in a network environment, including resource dependencies, …, which includes the resource dependencies; … based on the determined dynamic dependencies of the deployment topology… by determining dynamic dependencies across the deployment topology (Column 20 lines 30 – 38, lines 53 - 63 of Greifeneder states “Vertical relationship records 810 may be used to model relationships between different topology entities on different vertical levels of the topological model. As an example, a vertical relationship record may be used to model that a virtualized host computer system is virtualized by a specific hypervisor, that a specific hypervisor is managed by a specific virtualization manager, that a specific process group is running on a specific host computer system or that a specific process group provides a specific service… Horizontal relationship records 820 may be used to model communication activities between different topological entities of the same type or on the same topological level. A horizontal relationship record 820 may contain but is not limited to a client entityId 821 identifying the topology entity record that models the topological entity that performed the client side part of a communication, a server entityId 822 identifying the topological entity that performed the server side of the communication and a server port 822 further identifying the server side part of the communication activity.” Column 24 lines 30 – 43, 49 – 68 of Greifeneder states “The communication event buffer is used to store communication topology events 520 for which no corresponding communication topology event 520 representing the opposing communication endpoint has been received. 
As the OS agents that monitor communication activities on different operating systems and hosts operate independently and asynchronous to each other, corresponding communication topology events 520 typically arrive at the topology processer 331 at different points in time. The communication event buffer is used to keep unpaired communication topology event 520 until the corresponding opposing communication topology event is received… If otherwise step 1012 detects that a corresponding opposing communication topology event is available, the process continues with step 1014 which removes the corresponding topology event record 520 from the communication event buffer. Subsequent step 1015 checks if a horizontal relationship record 820 representing the communication described by the two matching communication topology events 520 is already available in the topology repository 337. This may e.g. be performed by searching for a horizontal relationship record 820 with a client entityId 821 corresponding to PGid 522 and OSid 523 of the communication topology event 520 with client server indicator 525 indicating the client side endpoint of the communication and a server entityId 822 corresponding to PGid 522 and OSid 523 of the communication topology event 520 with client server indicator indicating the sever side endpoint of the communication and with a sever port 823 equal to port section of the IP address and port field 524 describing the server side endpoint of the communication. In case a matching horizontal relationship record 820 is found, it may be updated with data received with the two communication topology events. If no matching horizontal relationship record 820 is found, a new one is created” Relation records from Greifeneder explicitly represent entity dependency links in a topology and dependencies are dynamically derived as events arrive over time.) 
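To make the quoted pairing mechanism concrete, the following minimal sketch (all class and field names are invented for illustration — Greifeneder discloses prose, not code) buffers unpaired communication topology events and, once the event for the opposing endpoint arrives, creates or updates a horizontal relationship record keyed by client, server, and server port, roughly as the excerpt describes:

```python
# Illustrative sketch of the event-pairing described in the Greifeneder
# excerpts above. CommEvent and TopologyRepository are invented names.
from dataclasses import dataclass

@dataclass(frozen=True)
class CommEvent:
    pg_id: str          # process-group id (PGid 522 in the excerpt)
    os_id: str          # operating-system id (OSid 523)
    server_port: int    # server-side port (from field 524)
    is_client: bool     # client/server indicator (525)

class TopologyRepository:
    def __init__(self):
        self.horizontal = {}   # (client ids, server ids, port) -> update count
        self.buffer = []       # unpaired events awaiting their counterpart

    def ingest(self, ev: CommEvent):
        # Look for a buffered event describing the opposing endpoint of the
        # same communication (same server port, opposite side indicator).
        for other in self.buffer:
            if other.server_port == ev.server_port and other.is_client != ev.is_client:
                self.buffer.remove(other)
                client = ev if ev.is_client else other
                server = other if ev.is_client else ev
                key = ((client.pg_id, client.os_id),
                       (server.pg_id, server.os_id), server.server_port)
                # Update an existing horizontal relationship record or create one.
                self.horizontal[key] = self.horizontal.get(key, 0) + 1
                return key
        # No opposing event yet: keep this one until its counterpart arrives.
        self.buffer.append(ev)
        return None
```

Because the OS agents report independently and asynchronously, the buffer is what lets corresponding events that arrive at different times still be matched into a single dependency link.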
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Jeuk, Kanso, and Greifeneder because they are directed to the same overall technical problem of reliably validating and deploying requested topologies in a dynamic network environment, and each reference provides compatible pieces of the claimed workflow. Jeuk teaches generating a topology deployment model, validating and verifying the model for deployment, and implementing deployment via deployment instructions in a multi-domain environment. Kanso teaches training an ML model to output a success likelihood score and automatically initiating deployment when the score meets/exceeds a predetermined threshold. Greifeneder teaches maintaining a topology repository that stores dependency records and updating the stored topology based on received topology change event data over time, providing updated dependency and availability state information for validation. One of ordinary skill in the art would be motivated to incorporate the teachings of Greifeneder and Kanso into that of Jeuk to reduce failed deployments and automate the decision based on a computed deployment likelihood of success, and to continuously maintain the updated master topology that reflects current dependencies and resource availability. The combination would have been predictable, as integrating threshold-based deployment and real-time topology updates into Jeuk’s deployment workflow would have been a routine choice to improve the robustness and accuracy of the deployment. Regarding claim 2, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches the deployment topology indicates how constituent parts of the requested resources and other resources interacting with the requested resources are interrelated and arranged in the network environment. 
([0026] of Jeuk states “As noted above, hybrid network topologies may refer to large-scale computing systems and/or solutions that are implemented by several different nodes of various different types (e.g., network nodes, storage nodes, compute nodes, software application or service nodes, virtual machines or instances, etc.), which are deployed across multiple different resource domains, and which interact and communicate to provide the functionality of the overall topology.” Column 20 lines 30 – 38, lines 53 - 63 of Greifeneder states “Vertical relationship records 810 may be used to model relationships between different topology entities on different vertical levels of the topological model. As an example, a vertical relationship record may be used to model that a virtualized host computer system is virtualized by a specific hypervisor, that a specific hypervisor is managed by a specific virtualization manager, that a specific process group is running on a specific host computer system or that a specific process group provides a specific service… Horizontal relationship records 820 may be used to model communication activities between different topological entities of the same type or on the same topological level. A horizontal relationship record 820 may contain but is not limited to a client entityId 821 identifying the topology entity record that models the topological entity that performed the client side part of a communication, a server entityId 822 identifying the topological entity that performed the server side of the communication and a server port 822 further identifying the server side part of the communication activity.” Column 24 lines 30 – 43, 49 – 68 of Greifeneder states “The communication event buffer is used to store communication topology events 520 for which no corresponding communication topology event 520 representing the opposing communication endpoint has been received. 
As the OS agents that monitor communication activities on different operating systems and hosts operate independently and asynchronous to each other, corresponding communication topology events 520 typically arrive at the topology processer 331 at different points in time. The communication event buffer is used to keep unpaired communication topology event 520 until the corresponding opposing communication topology event is received… If otherwise step 1012 detects that a corresponding opposing communication topology event is available, the process continues with step 1014 which removes the corresponding topology event record 520 from the communication event buffer. Subsequent step 1015 checks if a horizontal relationship record 820 representing the communication described by the two matching communication topology events 520 is already available in the topology repository 337. This may e.g. be performed by searching for a horizontal relationship record 820 with a client entityId 821 corresponding to PGid 522 and OSid 523 of the communication topology event 520 with client server indicator 525 indicating the client side endpoint of the communication and a server entityId 822 corresponding to PGid 522 and OSid 523 of the communication topology event 520 with client server indicator indicating the sever side endpoint of the communication and with a sever port 823 equal to port section of the IP address and port field 524 describing the server side endpoint of the communication. In case a matching horizontal relationship record 820 is found, it may be updated with data received with the two communication topology events. If no matching horizontal relationship record 820 is found, a new one is created” Relation records from Greifeneder explicitly represent entity dependency links in a topology and dependencies are dynamically derived as events arrive over time.) 
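Stepping back from the claim mapping, the threshold-gated decisioning quoted from Kanso for claim 1 — deploy when the predicted success probability meets a predetermined threshold, otherwise route to CI/CD experts — can be sketched as follows. Here `predict_success` is a hypothetical stub standing in for the trained predictive model, and the function names are invented for illustration:

```python
# Illustrative sketch of Kanso's threshold-gated deployment policy.
# predict_success is a stub; a real system would call the trained ML model.

def predict_success(deployment_features: dict) -> float:
    """Stub for the trained predictive model; returns a score in [0, 1]."""
    return deployment_features.get("historic_success_rate", 0.0)

def decide(deployment_features: dict, threshold: float = 0.7) -> str:
    score = predict_success(deployment_features)
    if score >= threshold:
        return "deploy"          # auto-initiate: score meets/exceeds threshold
    return "notify_experts"      # high risk: route to CI/CD experts for review
```

In Kanso's example a score of 0.8 against such a threshold would be categorized as high and the operator deployed; a lower score would instead trigger the notification path.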
the neural network model comprises memory elements and is a feed-forward type model with multiple middle neural layers to enable learning complex situations with the historic infrastructure deployment data. (Column 7 lines 47 – 60 of Kanso states “For example, such one or more ML and/or AI models can include, but are not limited to, a pretrained language representation model (e.g., transformer based) with fine-tuning (e.g., a bidirectional encoder representations from transformers (BERT) model), a long short-term memory (LSTM) model, a bidirectional LSTM model with a conditional random field (CRF) layer (abbreviated as BiLSTM-CRF), a shallow or deep neural network model, a convolutional neural network (CNN) model, a decision tree classifier, and/or any supervised or unsupervised ML and/or AI model that can perform natural language processing (NLP) using a similarity learning process and/or a similarity search process to define the above described mappings.” Column 13 lines 20 – 30 of Kanso states “This neural model represents snippets of code as continuous distributed vectors also known as code embedding. A process begins by decomposing code 602 to a collection of paths which is a fully connected layer while learning atomic representation of the path along with learning how to aggregate a set of context vectors. Code is employed and processed into a neural network 604 which generates a vector as an output 608. This is a numerical representation of the code given in the beginning. 
This vector representation can be used in machine-learning model 610 and generate predictions 612.” [0066] of Jeuk states “In some embodiments, the data may include details of previously implemented network topologies, such as the structure of the topology and the types and characteristics of nodes, constraints, resource inventories, as well as historical performance data for the previous topologies.” Kanso describes types of neural network models that could be used, and a deep neural network implies multiple layers. Kanso also illustrates an example neural network architecture with fully connected layers, which is a standard feed-forward neural network component.) Regarding claim 3, the rejection of claim 2 is incorporated herein. Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches determining, by the computing device, an availability state of the requested resources and the other resources interacting with the requested resources, ([0057] of Jeuk states “Resource inventories 210 may define the available resources within each of the resource domains 120, thus determining the limits for prospective deployments across the resource domains 120… A resource inventory may include data specifying which resource domains 120 are available, the specific resources that are available within each resource domain 120, such as the types of network devices and capabilities, the types of storage devices, the processing units, software services, and/or the types of virtual machine images that are available.”) wherein the generating the confidence score is further based on the state of the requested resources and the other resources interacting with the requested resources. ([0050] of Jeuk states “Performance levels may be expressed, for example, on a numeric scale or as a percentage of an “acceptable” or “optimal” performance level for the node, subnetwork, or topology, etc. 
In some examples, the trained models 204 may output a matrix of current performance levels associated with the different nodes of the network topology.” [0052] of Jeuk states “As described below in more detail, trained models 204 may provide their respective outputs, such as performance level data and/or optimization recommendations, based on the received operational input data, taking into account (e.g., during the model training processes) the structure of the topology and the types and characteristics of individual nodes, as well as constraints, resource inventories, etc., using the historical performance data for previous topologies from which the models 204 were trained.”) Regarding claim 4, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches accessing, by the computing device, a master topology indicating how resources of the plurality of resource providers are interrelated and arranged in the network environment; ([0026] of Jeuk states “[0026] As noted above, hybrid network topologies may refer to large-scale computing systems and/or solutions that are implemented by several different nodes of various different types (e.g., network nodes, storage nodes, compute nodes, software application or service nodes, virtual machines or instances, etc.), which are deployed across multiple different resource domains, and which interact and communicate to provide the functionality of the overall topology.” [0057] states “Resource inventories 210 may define the available resources within each of the resource domains 120, thus determining the limits for prospective deployments across the resource domains 120.” Jeuk models multiple resource domains (plurality of resource providers) as part of a hybrid network topology. Network topology shows how nodes (resources) across resource domains are interrelated and arranged.) 
and determining, by the computing device, that the deployment topology is enabled by the master topology. ([0060] of Jeuk states “After generating one or more prospective modified network topologies, based on the output from the machine-learning engine 112, as well as based on the logical model input 206, constraints 208, and resource inventories 210, the model generation engine 114 may validate and/or optimize the model using the model verification component 214, which also may verify the functional equivalency of the modified network topologies to the logical model input 206.” Jeuk validates a deployment topology against constraints and resource inventories, which contain data describing the hybrid network topology discussed above. Therefore, the validation and verification process described in [0060] determines whether the deployment topology is “enabled” by the master topology.) Regarding claim 5, the rejection of claim 4 is incorporated herein. Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches continuously monitoring, by the computing device, change event data from one or more of the plurality of resource providers in real-time; ([0043] of Jeuk states “The operational data may be transmitted through the gateways and/or other edge computing devices of the resource domains 120, to the machine-learning engine 112, either on a periodic basis (e.g., every second, every 10 seconds, every minute, every hour, etc.) or when triggered by specific events (e.g., performance thresholds, software or system errors, support ticket creation, etc.), and may be different for different sources of the operational data.” Column 2 Lines 36 – 45 of Greifeneder states “The model should be provided by a monitoring system that detects changes of the deployment of processes and operating systems and changes of virtualization or transactional interdependencies in real-time and also updates the model in real-time. 
The model should depict all applications run by the monitored data center and should also show all influencing factors from the virtualization, service reuse and background processing perspective that can have an impact on the performance of the applications run by the monitored data center.” Column 10 Lines 14 – 20 of Greifeneder states “The topology data from OS agents 310 and virtualization agents 316 is received by a monitoring node 329, which forwards it to the topology processor 331. The topology processor 331 processes the received topology data and updates the integrated topology model stored in the topology repository to reflect the topology changes reported by received topology data.”) and updating, by the computing device in real-time, one or more stored availability states of the resources of the plurality of resource providers in the master topology based on the change event data, ([0044] of Jeuk states “The operational data received by the machine-learning engine 112 may generally correspond to data collected after an initial deployment of the network topology within the resource domains 120… so that the operational data may provide practical status and system feedback data, and may take into account dynamic updates occurring at the nodes or elsewhere within the resource domains.” Jeuk dynamically receives continuous operational data as feedback and updates the topology in response to the conditions of the nodes (resources). 
Column 10 Lines 16 – 20 of Greifeneder states “The topology processor 331 processes the received topology data and updates the integrated topology model stored in the topology repository to reflect the topology changes reported by received topology data.” Column 25 Lines 12 – 16 of Greifeneder states “Creating or updating of topology entity records 801, vertical relationship records 810 or horizontal relationship records 820 may also contain setting or updating data describing the availability or existence of topological entities or relationships between topological entities.”) wherein the determining that the deployment topology is enabled by the master topology is based on the one or more stored availability states. ([0060] of Jeuk states “After generating one or more prospective modified network topologies, based on the output from the machine-learning engine 112, as well as based on the logical model input 206, constraints 208, and resource inventories 210, the model generation engine 114 may validate and/or optimize the model using the model verification component 214, which also may verify the functional equivalency of the modified network topologies to the logical model input 206.” The validation described with respect to claim 4 must consider the current state of the resources (as in the resource inventories) to determine whether the deployment topology is enabled. Because Jeuk updates a deployable topology using resource inventories and constraints, and Greifeneder teaches storing and updating availability state data in a topology repository in real time, a POSITA would use Greifeneder’s dynamically updated availability states as Jeuk’s available-resources input when deciding whether the requested deployment topology is enabled.) Regarding claim 7, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches updating, by the computing device, the ML model based on data regarding the deployment of the IT deployment. 
(Column 11 Lines 20 – 23 of Kanso states “Training data 318 collected over time, can be used for active learning 320 where a new operator is deployed to a known namespace.” Column 10 Lines 49 – 55 of Kanso states “Through this continual (or iterative) operation, prior histories of operator deployment can be utilized in various configured environments to predict success of operator deployment using machine-learning. Active learning can be performed continually to train and develop a machine-learning model based on new deployments of operators in different environments.” Column 12 Lines 60 – 68 of Kanso states “A system can allow predefined actions per risk prediction. In some cases when risk is low (threshold), an automation can deploy operators with higher confidence. On the other hand, when risk is high (threshold), the system can notify CI/CD experts to investigate and resolve potential risks or confirm that identified issues may not fail operators. These actions can be learned from data or active learning or adjusted as more data is collected over time.” Kanso teaches updating the ML model using deployment data as it performs active learning based on new deployments (using data collected from deployments to improve the model over time). [0044] of Jeuk states “The operational data received by the machine-learning engine 112 may generally correspond to data collected after an initial deployment of the network topology within the resource domains 120, and during time periods concurrent with the execution/operation of the various nodes of the topology, so that the operational data may provide practical status and system feedback data, and may take into account dynamic updates occurring at the nodes or elsewhere within the resource domains 120 which could not be predicted during the initial model generation process for the topology.”) Regarding claim 8, the rejection of claim 1 is incorporated herein. 
Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches generating and sending, by the computing device, a notification including the confidence score to an end user device in the network environment. (Column 12, Lines 53-55 of Kanso states “Policies can be defined based on probability; if success probability is greater than a threshold, then it is considered to be at low risk wherein actions are predefined based on risk factor.” Column 12, Lines 63-67 of Kanso states “On the other hand, when risk is high (threshold), the system can notify CI/CD experts to investigate and resolve potential risks or confirm that identified issues may not fail operators. These actions can be learned from data or active learning or adjusted as more data is collected over time.”) Regarding claim 9, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches the computing device includes software provided as a service in a cloud environment. ([0035] of Jeuk states “In some embodiments, the input received from the user device 130 describing the prospective network topology may account for multiple resource domains 120, including at least one public cloud network associated with a public cloud network provider, and at least one private cloud network associated with an enterprise. The enterprise may include programs, service models, and applications which reside in an on-premise datacenter of the enterprise. Such programs, service models, and applications may include software-as-a-service (SaaS) programs, platform-as-a-service (PaaS) programs, infrastructure-as-a-service (IaaS) programs, Load Balancing-as-a-service (LBaaS) programs, application frontends, application backends, application classification programs, firewalls or others.”) Claims 6 and 10 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jeuk et al. (U.S. Pub. 2021/0392049 A1) in view of Kanso et al. (U.S. Patent No. 12124924 B2), Greifeneder et al. 
(U.S. Patent No. 11659020 B2), further in view of Liu et al. (U.S. Pub. 2020/0110598 A1). Regarding claim 6, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Jeuk, Kanso, and Greifeneder teaches updating, by the computing device, the trained ML predictive model based on a failure or a success of the automatically initiated implementation of the IT deployment request to improve an accuracy of predictions over time within the trained ML predictive model. (Column 10 Lines 61 – 68 of Kanso states “As described in the high-level overview architecture 300, an operator is deployed in namespace 302. OSPS 304 is a success prediction service that is used to determine whether the deployment can be successful or not and provide a success rate, e.g., between 0 and 1. In this example, probability of success is 0.8. There might be some policies 306 to govern the process wherein if success criteria of probability are greater than X (wherein X is a pre-determined threshold), then deploy the operator, and if below threshold C do not deploy the operator. In this case, if 0.8 is categorized as high then the operator is deployed and if it is failed then it can be reported.” Column 12 lines 27 – 40, 60 – 68 of Kanso states “In active learning, a large amount of data can be randomly sampled from an underlying distribution and this large dataset can be used to train the model 108 to perform a prediction, e.g., success of operator deployment in a PaaS environment. A query can be made to unlabeled sets 404 which then the model 108 utilizes to predict success or failure at 406 and can use feedback from a user to validate the prediction. Similarly, labeled set operators 402 can be in namespaces that are trained based on training data. At 406, the model 108 predicts based on the trained data success or failure and can obtain feedback from the user to validate the prediction. 
The training data can be updated with namespaces and labels as desired… A system can allow predefined actions per risk prediction. In some cases when risk is low (threshold), an automation can deploy operators with higher confidence. On the other hand, when risk is high (threshold), the system can notify CI/CD experts to investigate and resolve potential risks or confirm that identified issues may not fail operators. These actions can be learned from data or active learning or adjusted as more data is collected over time.”) The combination of Jeuk, Kanso, and Greifeneder does not explicitly teach determining, by the computing device, one or more of the requested resources or their dependencies can be dependency-locked; and dependency-locking the one or more of the requested resources for a time period persisting until a completion of the deployment of the IT deployment request. However, Liu teaches determining, by the computing device, one or more of the requested resources or their dependencies can be dependency-locked; ([0019] of Liu states “Deployment of a modified service affects the functioning of other services that make use of the service.” [0020] of Liu states “To address the problems that deployment of modified executable code can cause in other services, a dependency lock is placed on candidate code to prevent deployment until tests on the dependent services are successfully completed”) and dependency-locking the one or more of the requested resources for a time period persisting until a completion of the deployment of the IT deployment request. ([0021] of Liu states “As described herein, developers of other services that rely on a service are enabled to place a dependency lock on the service. 
As a result, deployment of the service is only allowed when tests of the other services complete successfully…Alternatively, deployment continues after failure of a test only after an administrator of the service being deployed is made aware of the test failure.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Jeuk, Kanso, Greifeneder, and Liu because they are directed to the same overall technical problem of reliably validating and deploying requested topologies in a dynamic network environment, and each reference provides compatible pieces of the claimed workflow. Jeuk teaches generating a topology deployment model, validating and verifying the model for deployment, and implementing deployment via deployment instructions in a multi-domain environment. Kanso teaches training an ML model to output a success likelihood score and automatically initiating deployment when the score meets/exceeds a predetermined threshold. Greifeneder teaches maintaining a topology repository that stores dependency records and updating the stored topology based on received topology change event data over time, providing updated dependency and availability state information for validation. Liu teaches dependency locking in deployment pipelines such that deployment is prevented until dependent tests complete successfully, i.e., locking for a time period persisting until completion of required conditions. One of ordinary skill in the art would be motivated to incorporate the teachings of Liu into the combination of Jeuk, Kanso, and Greifeneder because Liu’s dependency locking is a predictable and widely used deployment technique that prevents premature deployment of dependent resources until the required dependent actions are completed. 
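Liu’s dependency-locking behavior, as characterized in the excerpts above, can be sketched as a lock that persists until every dependent service’s tests pass. All names in this sketch are illustrative, not from Liu’s disclosure:

```python
# Illustrative sketch of a Liu-style dependency lock: dependents place locks
# on a service, and deployment is allowed only once each dependent reports a
# passing test. Class and method names are invented for illustration.

class DependencyLock:
    def __init__(self, service: str):
        self.service = service
        self.pending = set()   # dependent services whose tests must still pass

    def place(self, dependent: str):
        """A dependent service places a lock on this service."""
        self.pending.add(dependent)

    def report_test_result(self, dependent: str, passed: bool):
        """Release the dependent's lock only when its tests pass."""
        if passed:
            self.pending.discard(dependent)

    def deployment_allowed(self) -> bool:
        # The lock persists until the required conditions complete, i.e.,
        # deployment proceeds only once every dependent's tests have passed.
        return not self.pending
```

A failed test leaves the lock in place, matching Liu’s description that deployment is only allowed when tests of the other services complete successfully (or after an administrator is made aware of the failure).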
The combination would have been predictable, reducing cascading failures and increasing robustness in automated deployment. Regarding claim 10, the combination of Jeuk, Kanso, Greifeneder, and Liu teaches receive an information technology (IT) deployment request for the deployment of at least one resource in the network environment; ([0083] of Liu states “In operation 1210, the deployment control module 260 receives, via the user interface and the UI module 270, a request to deploy a target software project. In the example of FIG. 7, the request is for deployment of the EasyDNS API.”) The rest of claim 10 recites substantially similar subject matter as the combination of claims 1 and 8, and is rejected with the same rationale, mutatis mutandis. Claims 11 – 15 recite substantially similar subject matter as claims 2 – 6 respectively, and are rejected with the same rationale, mutatis mutandis. Regarding claim 16, the rejection of claim 10 is incorporated herein. The combination of Jeuk, Kanso, Greifeneder, and Liu teaches initiate deployment of the at least one resource; ([0023] of Jeuk states “(iii) determining updated models and deploying updated network topologies across multiple resource domains based on the trained machine-learning models.”) The rest of claim 16 recites substantially similar subject matter as claim 7, and is rejected with the same rationale, mutatis mutandis. Claim 17 recites substantially similar subject matter as the combination of claims 10 and 11, and is rejected with the same rationale, mutatis mutandis. Claim 18 recites substantially similar subject matter to claim 13 and is rejected with the same rationale, mutatis mutandis. Claim 19 recites substantially similar subject matter to the combination of claims 12 and 14 and is rejected with the same rationale, mutatis mutandis. 
Claim 20 recites substantially similar subject matter to claim 15 and is rejected with the same rationale, mutatis mutandis. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYUNGKWON HAN whose telephone number is (571)272-5294. The examiner can normally be reached M-F: 9:00AM-6PM PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen can be reached at (571)272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BYUNGKWON HAN/Examiner, Art Unit 2121 /Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Apr 22, 2022
Application Filed
Jul 26, 2025
Non-Final Rejection — §101, §103
Aug 28, 2025
Applicant Interview (Telephonic)
Aug 28, 2025
Examiner Interview Summary
Nov 03, 2025
Response Filed
Feb 12, 2026
Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
