Prosecution Insights
Last updated: April 19, 2026
Application No. 18/077,965

BUILDING A ROBOT MISSION BASED ON A ROBOT-AGNOSTIC GRAPHICAL USER INTERFACE (GUI)

Non-Final OA (§103)

Filed: Dec 08, 2022
Examiner: XIE, THEODORE L
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Yokogawa Electric Corporation
OA Round: 3 (Non-Final)

Grant Probability: 50% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 1y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (2 granted / 4 resolved; -2.0% vs TC avg)
Interview Lift: +100.0% across resolved cases with an interview (strong)
Avg Prosecution: 1y 7m (fast prosecutor)
Career History: 42 total applications across all art units; 38 currently pending
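The headline figures above follow from the raw counts shown. A quick arithmetic check (the Tech Center average of 52% is not stated directly; it is inferred from the reported -2.0% delta):

```python
# Recompute the examiner summary figures from the raw counts shown above.
granted, resolved = 2, 4
career_allow_rate = granted / resolved        # 0.5, displayed as 50%
tc_avg = 0.52                                 # inferred from the -2.0% delta
delta_vs_tc = career_allow_rate - tc_avg      # approximately -0.02, displayed as -2.0%
print(f"{career_allow_rate:.0%}, {delta_vs_tc:+.1%}")
```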

Statute-Specific Performance

§101: 36.6% (-3.4% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

Tech Center averages are estimates; based on career data from 4 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Application

The following is a Non-Final Office Action. In response to Examiner's communication on 11/03/2025, Applicant on 01/05/2026 amended Claims 1, 6, 16-17, and 20, cancelled Claim 2, and added Claim 21. Claims 1 and 3-21 are now pending in this application and have been rejected below.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on February 11, 2026. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is acknowledged and has been considered by the examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/05/2026 has been entered.

Response to Amendment

Applicant's amendments with respect to the rejections under 35 USC 101 have been found to surmount those rejections. Accordingly, they have been withdrawn below. Applicant's amendments are insufficient to overcome the 35 USC 103 rejections set forth in the previous action. Therefore, those rejections have been updated to address the amendments and are maintained below.
Response to Arguments – 35 USC § 101

Applicant's amendments to independent Claims 1, 16, and 20, including the limitation "initiating performance of the third set of tasks using at least one of the first robot device and the second robot device," have been found to surmount the 35 USC 101 rejection as integrating the claim into a practical application: initiating performance using a robot device places the claimed subject matter outside the bounds of a mental process applied with generic computing components. In light of this, the rejections have been withdrawn.

Response to Arguments – 35 USC § 103

Applicant's arguments with respect to the rejections under 35 USC 103 have been fully considered but are moot in light of the updated rejections necessitated by Applicant's amendments. See the rejection of Claims 1, 16, and 20 under 35 USC 103 below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Statement of the Rejection

Claims 1, 3-7, 10-11, and 13-21 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (US 20200171671 A1) in view of Ramanujam (US 20200101611 A1), in further view of Blonder (US 20220250658 A1).

Claim 1

Huang teaches: A method for managing operation of two or more robot devices of varying types in an environment, the method comprising:

(In [0240], "The system can then be configured to determine the differences and similarities between robots and choose executable commands with high likelihoods of success between various robots having similar features. By way of example, various robots may have different 'hand' portions or grasping mechanisms, however, the grasping mechanism may be provided about a standard forearm and elbow. In such cases the executable commands selected by the system can include grasper specific sub-commands, but then because the wrist, forearm, and elbow are similar, the system can select those commands from a known set of successful operations for performing the same or similar tasks". These differences in robot configuration are implicitly types.)

electronically receiving a first set of commands associated with operating a first robot device to perform a first set of tasks; recording the first set of commands to a first mission file; electronically receiving a second set of commands associated with operating a second robot device to perform a second set of tasks; recording the second set of commands to a second mission file;

(In [0128], Huang discloses that this invention may include a developer portal, "wherein the developer portal can be configured to receive packets containing one or more executable commands for the one or more articulating joints configured to cause a particular robot to perform a predetermined sub-task, wherein a plurality of packets and subtasks are retained in the historical database." Taking packets, which can contain multiple tasks, to analogize to mission files, here we see the means to store into a historical record.)

identifying a third set of tasks common to the first set of tasks and the second set of tasks;

(In [0249], when a user issues a command, it is disclosed that the system can "utilize the extracted text to ultimately determine a match between the commands embedded in the received data and one or more previously performed actions saved in an action database provided on the cloud-based server".)

mapping the third set of tasks to the first mission file and the second mission file,

(In [0305], Huang discloses that after a query is tokenized, "The tokens are then used as the input for the searching and matching process of historical databases or previously generated trainings, etc. so as to find matches. The tokens of the command can then be associated with particular answers within the database to later increase searching accuracy based on similar tokens." This can work in tandem with the returning of search results outlined in [0253], facilitating the provision of the previously received packets.)

electronically receiving, from a machine learning model, a prediction of which of the first robot device and the second robot device has a greater likelihood of success to accomplish the third set of tasks;

(In [0013] of Huang, a probabilistic determination regarding mission success based on environmental parameters is made. As illustrated in Fig. 10 of Huang, an AI Platform 408 is a core component in the operational platform that calculates mission probabilities and operates robots; the application of machine learning can also be found in the AI trainer module in 408. In [0308] of Huang: "augmentation input or supervised machine learning wherein the AI is allowed to proceed unassisted but with human oversight with a human operator ready to intervene if the robot begins acting outside of a desired set of actions. Also shown here is an AI platform 408 which can organize various aspects of the AI as particular modules and kernel functions such as the deep learning algorithms, big data access and searching, IOT connectivity, sensor controls such as smart vision, navigation mapping, and NLP".)

in response to receiving a search request indicating one or more tasks of the third set of tasks, and based on the mapping, providing the first mission file and the second mission file along with the prediction to a user;

(In [0253], it is stated that, "Particularly early on when building historical databases, users can review the results [when performing a search query for past actions] and select whether the sequence resulted in success, whether a particular sequence will result in success, or even which probabilities the network should display and return the proposed label".)

electronically receiving a selection by the user of either the first robot device or the second robot device for use in execution of the third set of tasks

(In [0242], "The system can then be configured to determine the differences and similarities between robots and choose executable commands with high likelihoods of success between various robots having similar features". In [0284], "It will be appreciated that in some embodiments, human operators can be utilized so as to provide the oversight for all cloud robot operations and override any AI controls, providing an added level of safety".)

and initiating performance of the third set of tasks using the first robot device or the second robot device selected by the user.

(In [0014], "wherein, when the probabilistic determination is above a pre-determined threshold, the processor is configured to communicate one or more necessary executable commands to the robot control unit, wherein the robot control unit then causes each of the articulating joints of the robot to execute each of the necessary executable commands so as to complete the user command". In [0284], "It will be appreciated that in some embodiments, human operators can be utilized so as to provide the oversight for all cloud robot operations and override any AI controls, providing an added level of safety".)

Huang does not expressly disclose the remaining limitations. However, Ramanujam teaches:

wherein the first set of tasks, the second set of tasks, and the third set of tasks comprise robot-agnostic tasks absent an indication of a robot type in the first set of commands and the second set of commands

(Ramanujam discloses in [0013], "Embodiments herein disclose a method and system for generating, simulating, and executing a robotic solution recipe to execute a robotic solution by a plurality of robots, according to an embodiment. In one embodiment, the system includes a software framework that allows a user to generate a reusable robotic solution recipe, simulate execution of the reusable robotic recipe, and deploy the reusable robotic solution recipe at a hardware-vendor agnostic controller that executes the reusable robotic solution recipe".)

Ramanujam discloses a system for managing robot tasks. Huang discloses a system meant to allow humans to interface with a plurality of robots. Each reference discloses means for managing robots. Applying the hardware-agnostic methodology as recorded in Ramanujam to Huang is appropriate because the two references are fundamentally concerned with the same problem: administering control to robot units. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to apply the hardware-agnostic approach of Ramanujam's platform. Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit that doing so would generalize the search engine, enabling matching records to be returned based on what is there without necessitating potentially irrelevant details to be included.
Huang combined with Ramanujam does not expressly disclose the remaining limitations. However, Blonder teaches:

wherein the prediction takes into account an estimated energy usage for the first robot device and the second robot device to complete the third set of tasks

(In [0063], "The mission engine may analyze a plurality of operational parameters of a plurality of autonomous vehicles, specifically with respect to the computed mission parameters in order to identify one or more autonomous vehicles which are determined to be capable of carrying out the inspection mission and successfully acquire the required inspection data. The operational parameters may include, for example, a type (aerial, ground, naval), terrain capability, speed, range, altitude, maneuverability, power consumption, availability, operational cost and/or the like".)

and an estimated amount of time taken by the first robot device and the second robot device to complete the third set of tasks

(In [0153], "In another example, the mission engine 230 may select the capable autonomous vehicle(s) 202 according to a fourth optimization function defining an earliest completion time of the inspection mission. For example, assuming the mission engine 230 identifies two capable autonomous vehicles 202 for carrying out the inspection mission, for example, the UAV 202A1 and a certain ground autonomous vehicle 202B. While the UAV 202A1 may complete the inspection in shorter time (duration) compared to the ground autonomous vehicle 202B, due to limited availability of the UAV 202A1 the inspection mission may be completed sooner when using the ground autonomous vehicle 202B. The mission engine 230 may therefore select the ground autonomous vehicle 202B to conduct the inspection mission".)

Blonder discloses a system for managing unmanned vehicles. Huang combined with Ramanujam discloses a system meant to allow humans to interface with a plurality of robots. Each reference discloses a system for controlling robots. Extending the storage of "environment variables" as recorded in Huang combined with Ramanujam to include storing information like vehicle capabilities and types, as mentioned in Blonder, is applicable to Huang because the purpose of incorporating environmental variables into Huang's database is to facilitate searching for relevant sets of tasks; an operational parameter such as vehicle capability or type could clearly be a relevant concern. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to record details of vehicle configuration as taught in Blonder and apply that to the storage of environmental variables as taught in Huang combined with Ramanujam. Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit that adopting Blonder's technique would enable users to differentiate between comparable missions with more granularity, paying particular attention to potentially relevant distinctions between robot configurations.

Claim 3

Huang teaches: The method of claim 1, further comprising: confirming a mapping of the third set of tasks to the first mission file and the second mission file based on at least one of: an input from the user, confirming the mapping; and a comparison result of a confidence value associated with the mapping to a threshold.

(Huang discloses in [0252], in providing guidance to robots navigating environments, "the system can build a robust database and map similar motions in similar environments such that eventually the system can map a motion sequence having a high probability of success in when receiving similar future commands in similar future environmental situations".)
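The workflow that the Claim 1 rejection maps onto the references can be summarized in a short sketch. All names, data structures, and scoring logic below are illustrative assumptions for readability; they come from neither the application nor the cited references:

```python
# Hypothetical sketch of the recited Claim 1 workflow: record commands to
# mission files, identify the common "third set of tasks", and predict which
# robot is better suited. The scoring stands in for the claimed ML prediction.
from dataclasses import dataclass, field

@dataclass
class MissionFile:
    robot: str
    tasks: list = field(default_factory=list)  # recorded robot-agnostic commands

def identify_common_tasks(first: MissionFile, second: MissionFile) -> list:
    """Identify the third set of tasks common to both mission files."""
    return [t for t in first.tasks if t in second.tasks]

def predict_better_robot(common_tasks: list, estimates: dict) -> str:
    """Stand-in for the claimed prediction: rank robots by estimated
    energy usage plus completion time for the common tasks (lower wins)."""
    return min(estimates, key=lambda r: estimates[r]["energy"] + estimates[r]["time"])

# Two robot devices of different types record overlapping task sets.
first = MissionFile("drone", ["navigate", "inspect", "photograph"])
second = MissionFile("ground_rover", ["navigate", "inspect", "sample"])

common = identify_common_tasks(first, second)
prediction = predict_better_robot(common, {
    "drone": {"energy": 5.0, "time": 2.0},
    "ground_rover": {"energy": 3.0, "time": 6.0},
})
# The user would then select a robot and performance of the tasks is initiated.
print(common, prediction)
```

The sketch only mirrors the claim language; it says nothing about how Huang, Ramanujam, or Blonder actually implement these steps.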
Claim 4

Huang teaches: The method of claim 1, further comprising: generating a third mission file comprising the third set of tasks in response to mapping the third set of tasks to the first mission file and the second mission file; and providing the third mission file.

(Huang states in [0110] that "the processor of the cloud based robotic intelligence engine is configured to utilize a deep learning neural network to recognize one or more similar past environmental scenarios and recognize one or more historical actions which have resulted in successful execution of a user command, and subsequently generate a set of executable commands which will have an increased probabilistic likelihood of success in a determined real-time environment." Since the set of executable commands is able to be saved to a historical database, the limitation is disclosed. Here, "providing" is taken to be the generated set of executable commands that have been mapped to previous scenarios. Alternatively, if "providing" is meant as returning to the user for viewing, the logic for such is outlined in [0253].)

Claim 5

Huang combined with Ramanujam and Blonder teaches all of the disclosed limitations of Claim 1, as outlined above. Huang does not explicitly teach: The method of claim 1, further comprising: identifying, based on the third set of tasks, at least one robot capability common to the two or more robot types.

However, Blonder teaches: The method of claim 1, further comprising: identifying, based on the third set of tasks, at least one robot capability common to the first robot device and the second robot device.

(Blonder teaches this limitation in [0063] by disclosing analyzing "operational parameters of a plurality of autonomous vehicles…in order to identify one or more autonomous vehicles which are determined to be capable of carrying out the inspection mission", with details such as "operational parameters of their sensors, for example, number of sensors, sensing technology, resolution, Field of View (FOV), required illumination and/or the like".)

It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to record details of vehicle configuration as taught in Blonder and apply that to the storage of environmental variables as taught in Huang combined with Ramanujam. Motivation to do so comes from the same rationale as outlined above with respect to Claim 1.

Claim 6

Huang combined with Ramanujam and Blonder teaches all of the disclosed limitations of Claim 1, as outlined above. Huang does not explicitly teach: The method of claim 5, further comprising: associating one or more tasks of the third set of tasks to one or more robot capabilities; associating the one or more robot capabilities to at least the first robot device and the second robot device;

However, Blonder teaches: The method of claim 5, further comprising: associating one or more tasks of the third set of tasks to one or more robot capabilities; associating the one or more robot capabilities to at least the first robot device and the second robot device;

(In [0052] of Huang, it is disclosed that there are means to store "the associated result [of a command] and the associated executable commands and the associated environmental parameters to the cloud-based robotic intelligence engine for inclusion in the historical database for future access", and to recognize "similar past environmental scenarios" per [0110]. In Blonder, such variables associated with a mission are disclosed to encompass "a type (aerial, ground, naval)", with such types provided as an example. So, a stored task can be associated with environmental information (robot capabilities), and these robot capabilities can be associated with a specific type (aerial, ground, or naval) of robot that might be required to execute the set of tasks.)

Huang does teach: identifying, based on the associating of the one or more robot capabilities to at least the first robot device and the second robot device, the at least one robot capability common to at least the first robot device and the second robot device

(As established in the combination of Huang and Blonder with respect to claim 5, the historical database in Huang would be able to store data as disclosed by Blonder regarding vehicle capabilities and type. Searching based on similarities is already provided for in Huang; in [0241], it is disclosed that a "humanoid robot can break down the various motions saved in the historical database with regard to the arm robot and implement the potential executable commands to the arm motions of the humanoid robot." Further, in [0242], "The system can then be configured to determine the differences and similarities between robots and choose executable commands with high likelihoods of success between various robots having similar features." Continuing from [0240], "in such cases the executable commands selected by the system can include grasper specific sub-commands". It could then be determined, based on the performance of a grasper-specific command, that the robot that performed it is capable of "grasping". Analogously, by extending the information derived from associated environmental parameters as taught by Blonder, the identification of capabilities common to robot types would be facilitated.)

One of ordinary skill in the art would have recognized that applying the known technique of Blonder would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to Claim 1.

Claim 7

Huang combined with Ramanujam and Blonder teaches all of the disclosed limitations of Claim 5, as outlined above. Huang does not teach: The method of claim 5, wherein the at least one robot capability comprises: a mobility type; and a payload type.

However, Blonder teaches: The method of claim 5, wherein the at least one robot capability comprises: a mobility type; and a payload type.

(In [0063], Blonder gives the means to associate attributes analogous to mobility, i.e. "terrain capability, speed, range, altitude, maneuverability", and payload: "moreover, each of the autonomous vehicles may be equipped with one or more sensors, for example, an imaging sensor (e.g. camera, video camera, night vision camera, Infrared camera, thermal imaging sensor, etc.), a depth and/or ranging sensor (e.g., Light imaging, Detection, and Ranging (LiDAR) sensor, Radio Detection and Ranging (RADAR) sensor, Sound Navigation Ranging (SONAR) sensor, etc.) and/or the like. The operational parameters of each of the autonomous vehicles may therefore further include one or more operational parameters of their sensors".)

One of ordinary skill in the art would have recognized that applying the known technique of Blonder would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to Claim 1.

Claim 10

Huang teaches: The method of claim 1, further comprising: displaying a graphical user interface (GUI), wherein at least one of receiving the first set of commands, receiving the second set of commands, receiving the search request, and providing the first mission file and the second mission file are via the GUI.
(In [0515], "Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification".)

Claim 11

Huang combined with Ramanujam and Blonder teaches all of the disclosed limitations of Claim 1, as outlined above. Huang does not explicitly teach: The method of claim 1, wherein the first set of tasks, the second set of tasks, and the third set of tasks comprise robot-agnostic tasks absent an indication of a target robot type.

However, Ramanujam teaches: The method of claim 1, wherein the two or more robot devices are of varying vendors or manufacturers, and wherein the first set of tasks, the second set of tasks, and the third set of tasks comprise robot-agnostic tasks absent an indication of a robot vendor or manufacturer in the first set of commands and the second set of commands.

(Ramanujam discloses in [0013], "Embodiments herein disclose a method and system for generating, simulating, and executing a robotic solution recipe to execute a robotic solution by a plurality of robots, according to an embodiment. In one embodiment, the system includes a software framework that allows a user to generate a reusable robotic solution recipe, simulate execution of the reusable robotic recipe, and deploy the reusable robotic solution recipe at a hardware-vendor agnostic controller that executes the reusable robotic solution recipe".)

Claim 13

Huang combined with Ramanujam and Blonder teaches all of the disclosed limitations of Claim 1, as outlined above. Huang teaches: The method of claim 1, wherein at least one of the first robot device and the second robot device is a flight-capable drone.

(See [0058] of Huang: "According to some embodiments of the present invention, there are provided methods, systems and computer program products for automatically selecting and operating one or more autonomous vehicles, for example, an aerial autonomous vehicle, a ground autonomous vehicle, a naval autonomous vehicle and/or the like to acquire (collect, capture, etc.) inspection data relating to one or more assets…")

Claim 14

Huang teaches: The method of claim 1, wherein a first set of candidate tasks performable by the first robot device at least partially overlap a second set of candidate tasks performable by the second robot device.

(This is disclosed by the same sections of Huang, namely [0128], [0249], and [0253], as cited above. Taking packets, which can contain multiple tasks, to analogize to mission files, here we see the means to draw from the historical record for the sake of providing them, taken to be executing, based on overlap, i.e., the aforementioned similarities that were the basis of the search.)

Claim 15

Huang teaches: The method of claim 1, wherein: the first set of commands comprises a first sequence of commands; and the second set of commands comprises a second sequence of commands.

(This is disclosed by the same sections of Huang, namely [0128], [0249], and [0253], as cited above. Taking packets, which can contain multiple tasks, to analogize to mission files, here we see the means to draw from the historical record for the sake of providing them, taken to be executing. As the broadest reasonable interpretation of a "sequence of commands" encompasses a set of commands, we understand this limitation to be disclosed in Huang in the cited paragraphs.)
Claim 16 Huang teaches: A system for managing operation of two or more robot devices of varying types in an environment, (In [0240], “The system can then be configured to determine the differences and similarities between robots and choose executable commands with high likelihoods of success between various robots having similar features. By way of example, various robots may have different “hand” portions or grasping mechanisms, however, the grasping mechanism may be provided about a standard forearm and elbow. In such cases the executable commands selected by the system can include grasper specific sub-commands, but then because the wrist, forearm, and elbow are similar, the system can select those commands from a known set of successful operations for performing the same or similar tasks”. These differences in robot configuration are implicitly types.) the system comprising: a processor; and a memory coupled with the processor, (Huang discloses in [0511], "The processes and logic flows described in this disclosure can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output". Elaborating in [0512], "Generally , a processor will receive instructions and data from a read - only memory, or a random-access memory, or both. Elements of a computer can include a processor configured to perform actions in accordance with instructions and one or more memory devices for storing instructions and data.") wherein the memory stores data that, when executed by the processor, enables the processor to: (In [0512], "Generally, a processor will receive instructions and data from a read - only memory, or a random-access memory, or both. Elements of a computer can include a processor configured to perform actions in accordance with instructions and one or more memory devices for storing instructions and data. 
“As Huang states in [ 0515 ], "Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back - end component , e.g. , as a data server , or that includes a middleware component , e.g. , an application server , or that includes a front - end component , e.g. , a client computer having a graphical user interface".) electronically receive a first set of commands associated with operating a first robot device to perform a first set of tasks; record the first set of commands to a first mission file; electronically receive a second set of commands associated with operating a second robot device to perform a second set of tasks; record the second set of commands to a second mission file; (In [0128], Huang discloses this invention may include a developer portal, "wherein the developer portal can be configured to receive packets containing one or more executable commands for the one or more articulating joints configured to cause a particular robot to perform a predetermined sub - task, wherein a plurality of packets and subtasks are retained in the historical database." Taking packets, which can contain multiple tasks, to analogize to mission files, here we see the means to store into a historical record.) identify a third set of tasks common to the first set of tasks and the second set of tasks; (In [0249] when a user issues a command, it is disclosed the system can "utilize the extracted text to ultimately determine a match between the commands embedded in the received data and one or more previously performed actions saved in an action database provided on the cloud-based server".) map the third set of tasks to the first mission file and the second mission file, (In [0305], Huang discloses that after a query is tokenized, "The tokens are then used as the input for the searching and matching process of historical databases or previously generated trainings , etc. so as to find matches. 
The tokens of the command can then be associated with particular answers within the database to later increase searching accuracy based on similar tokens." This can work in tandem with the returning of search results outlined in [0253, facilitating the provision of the previously received packets.) electronically receive, from a machine learning model, a prediction of which of the first robot device and the second robot device has a greater likelihood of success to accomplish the third set of tasks; In [0013] of Huang, a probabilistic determination regarding mission success based on environmental parameters is made. As illustrated in Fig. 10 of Huang, an AI Platform 408 is a core component in the operational platform that calculates mission probabilities and operates robots; the application of machine learning can also be found in the AI trainer module in 408, in [0308] of Huang, “augmentation input or supervised machine learning wherein the AI is allowed to proceed unassisted but with human oversight with a human operator ready to intervene if the robot begins acting outside of a desired set of actions. Also shown here is an AI platform 408 which can organize various aspects of the AI as particular modules and kernel functions such as the deep learning algorithms, big data access and searching, IOT connectivity, sensor controls such as smart vision, navigation mapping, and NLP”. in response to receipt a search request indicating one or more tasks of the third set of tasks, and based on the mapping, providing the first mission file and the second mission file along with the prediction to a user; (In [0253], it is stated that, "Particularly early on when building historical databases , users can review the results [when performing a search query for past actions] and select whether the sequence resulted in success, whether a particular sequence will result in success , or even which probabilities the network should display and return the proposed label".) 
and electronically receive a selection by the user of either the first robot device or the second robot device for use in execution of the third set of tasks In [0242], “The system can then be configured to determine the differences and similarities between robots and choose executable commands with high likelihoods of success between various robots having similar features”. In [0284], “It will be appreciated that in some embodiments, human operators can be utilized so as to provide the oversight for all cloud robot operations and override any AI controls, providing an added level of safety”. and initiate performance of the third set of tasks using the first robot device or the second robot device In [0014], “wherein, when the probabilistic determination is above a pre-determined threshold, the processor is configured to communicate one or more necessary executable commands to the robot control unit, wherein the robot control unit then causes each of the articulating joints of the robot to execute each of the necessary executable commands so as to complete the user command”. selected by the user. In [0284], “It will be appreciated that in some embodiments, human operators can be utilized so as to provide the oversight for all cloud robot operations and override any AI controls, providing an added level of safety”. Huang does not expressly disclose the remaining limitations. However, Ramanujam teaches: wherein the first set of tasks, the second set of tasks, and the third set of tasks comprise robot-agnostic tasks absent an indication of a robot type in the first set of commands and the second set of commands Ramanujam discloses in [0013], "Embodiments herein disclose a method and system for generating, simulating, and executing a robotic solution recipe to execute a robotic solution by a plurality of robots, according to an embodiment. 
In one embodiment, the system includes a software framework that allows a user to generate a reusable robotic solution recipe, simulate execution of the reusable robotic recipe, and deploy the reusable robotic solution recipe at a hardware-vendor agnostic controller that executes the reusable robotic solution recipe". It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to apply the hardware-agnostic approach of Ramanujam’s platform to the system of Huang. Motivation to do so comes from the same rationale outlined above with respect to Claim 1. Huang combined with Ramanujam does not expressly disclose the remaining limitations. However, Blonder teaches: wherein the prediction takes into account an estimated energy usage for the first robot device and the second robot device to complete the third set of tasks In [0063], “The mission engine may analyze a plurality of operational parameters of a plurality of autonomous vehicles, specifically with respect to the computed mission parameters in order to identify one or more autonomous vehicles which are determined to be capable of carrying out the inspection mission and successfully acquire the required inspection data. The operational parameters may include, for example, a type (aerial, ground, naval), terrain capability, speed, range, altitude, maneuverability, power consumption, availability, operational cost and/or the like” and an estimated amount of time taken by the first robot device and the second robot device to complete the third set of tasks In [0153], “In another example, the mission engine 230 may select the capable autonomous vehicle(s) 202 according to a fourth optimization function defining an earliest completion time of the inspection mission.
For example, assuming the mission engine 230 identifies two capable autonomous vehicles 202 for carrying out the inspection mission, for example, the UAV 202A1 and a certain ground autonomous vehicle 202B. While the UAV 202A1 may complete the inspection in shorter time (duration) compared to the ground autonomous vehicle 202B, due to limited availability of the UAV 202A1 the inspection mission may be completed sooner when using the ground autonomous vehicle 202B. The mission engine 230 may therefore select the ground autonomous vehicle 202B to conduct the inspection mission”. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to record details of vehicle configuration as taught in Blonder and apply that to the storage of environmental variables as taught in Huang combined with Ramanujam. Motivation to do so comes from the same rationale as outlined above in Claim 1. Claim 17 Huang teaches: The system of claim 16, wherein the data, when executed by the processor, further enables the processor to: display a graphical user interface (GUI), and electronically receive the first set of commands via the GUI (In [0512], "Generally, a processor will receive instructions and data from a read-only memory, or a random-access memory, or both. Elements of a computer can include a processor configured to perform actions in accordance with instructions and one or more memory devices for storing instructions and data." As Huang states in [0515], "Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface".)
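For illustration only, the token-based matching Huang describes in [0305] and [0249] — tokenize a free-text command, then match the tokens against tasks recorded in a historical database — might be sketched as follows. This is a hypothetical sketch, not code from any reference of record; all names (`tokenize`, `match_missions`, the sample database) are invented:

```python
def tokenize(command: str) -> set[str]:
    # Minimal stand-in for Huang's NLP tokenization: lowercase, split on whitespace.
    return set(command.lower().split())

def match_missions(command: str, historical_db: dict[str, set[str]]) -> list[str]:
    """Return IDs of stored mission files whose task tokens overlap the query,
    mirroring the search-and-match step described in Huang [0305]."""
    query = tokenize(command)
    return [mission_id for mission_id, task_tokens in historical_db.items()
            if query & task_tokens]  # any shared token makes the mission a candidate

# Hypothetical historical database: mission-file ID -> recorded task tokens.
db = {
    "mission_a": tokenize("inspect valve pressure gauge"),
    "mission_b": tokenize("move pallet to loading dock"),
}
print(match_missions("inspect the pressure gauge", db))  # ['mission_a']
```

A real system would of course use richer NLP and scoring, but the sketch captures the tokenize-then-intersect structure the cited paragraphs describe.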
Claim 18 Huang teaches: The system of claim 16, wherein the data, when executed by the processor, further enables the processor to: confirm a mapping of the third set of tasks to the first mission file and the second mission file based on at least one of: an input from the user confirming the mapping; and a comparison result of a confidence value associated with the mapping to a threshold. (Huang discloses in [0252], in providing guidance to robots navigating environments, “the system can build a robust database and map similar motions in similar environments such that eventually the system can map a motion sequence having a high probability of success when receiving similar future commands in similar future environmental situations".) Claim 19 Huang teaches: The system of claim 16, wherein the data, when executed by the processor, further enables the processor to: generate a third mission file comprising the third set of tasks in response to mapping the third set of tasks to the first mission file and the second mission file. (Huang states in [0110] that "the processor of the cloud based robotic intelligence engine is configured to utilize a deep learning neural network to recognize one or more similar past environmental scenarios and recognize one or more historical actions which have resulted in successful execution of a user command, and subsequently generate a set of executable commands which will have an increased probabilistic likelihood of success in a determined real-time environment." Since the set of executable commands are able to be saved to a historic database, the limitation is disclosed. Here, providing is taken to be the generated set of executable commands that have been mapped to previous scenarios. Alternatively, if providing is meant to be returning to the user for viewing, the logic for such is outlined in [0253].)
Claim 20 Huang teaches: A robot system for managing operation of two or more robot devices of varying types in an environment, the system comprising: (In [0240], “The system can then be configured to determine the differences and similarities between robots and choose executable commands with high likelihoods of success between various robots having similar features. By way of example, various robots may have different “hand” portions or grasping mechanisms, however, the grasping mechanism may be provided about a standard forearm and elbow. In such cases the executable commands selected by the system can include grasper specific sub-commands, but then because the wrist, forearm, and elbow are similar, the system can select those commands from a known set of successful operations for performing the same or similar tasks”. These differences in robot configuration implicitly constitute robot types.) a database comprising a data record of mission files associated with a set of recorded robot missions; As disclosed in [0259], Huang mentions "a historical database containing a plurality of historical actions and associated environmental parameters". a processor; and a memory coupled with the processor, Huang discloses in [0511], "The processes and logic flows described in this disclosure can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output". Elaborating in [0512], "Generally, a processor will receive instructions and data from a read-only memory, or a random-access memory, or both. Elements of a computer can include a processor configured to perform actions in accordance with instructions and one or more memory devices for storing instructions and data."
wherein the memory stores data that, when executed by the processor, enables the processor to: electronically receive a search request indicating one or more candidate tasks to be performed with respect to the environment The processor is disclosed in [0512], "Generally, a processor will receive instructions and data from a read-only memory, or a random-access memory, or both. Elements of a computer can include a processor configured to perform actions in accordance with instructions and one or more memory devices for storing instructions and data". In [0128], Huang discloses this invention may include a developer portal, "wherein the developer portal can be configured to receive packets containing one or more executable commands for the one or more articulating joints configured to cause a particular robot to perform a predetermined sub-task, wherein a plurality of packets and subtasks are retained in the historical database, and wherein the processor is configured to determine a plurality of appropriate packets and chain the packets together to execute a user command". Taking packets, which can contain multiple tasks, to analogize to mission files, here we see the means to draw from the historical record in order to provide them, which is taken to correspond to executing the search request. map the one or more candidate tasks to at least a first mission file and a second mission file from the data record of mission files In [0305], Huang discloses that after a query is tokenized, "The tokens are then used as the input for the searching and matching process of historical databases or previously generated trainings, etc. so as to find matches. The tokens of the command can then be associated with particular answers within the database to later increase searching accuracy based on similar tokens." This can work in tandem with the returning of search results outlined in [0253] above, facilitating the provision of the previously received packets.
to identify at least one of the one or more candidate tasks common to at least the first mission file and the second mission file, In [0249], when a user issues a command, it is disclosed the system can "utilize the extracted text to ultimately determine a match between the commands embedded in the received data and one or more previously performed actions saved in an action database provided on the cloud-based server". wherein the first mission file corresponds to commands associated with a first robot device and the second mission file corresponds to commands associated with a second robot device; In [0128], Huang discloses this invention may include a developer portal, "wherein the developer portal can be configured to receive packets containing one or more executable commands for the one or more articulating joints configured to cause a particular robot to perform a predetermined sub-task, wherein a plurality of packets and subtasks are retained in the historical database." Taking packets, which can contain multiple tasks, to analogize to mission files, here we see the means to store into a historical record. and electronically receive, from a machine learning model, a prediction of which of the first robot device and the second robot device has a greater likelihood of success to accomplish the at least one of the one or more candidate tasks; In [0013] of Huang, a probabilistic determination regarding mission success based on environmental parameters is made. As illustrated in Fig. 10 of Huang, an AI Platform 408 is a core component in the operational platform that calculates mission probabilities and operates robots; the application of machine learning can also be found in the AI trainer module in 408, in [0308] of Huang, “augmentation input or supervised machine learning wherein the AI is allowed to proceed unassisted but with human oversight with a human operator ready to intervene if the robot begins acting outside of a desired set of actions.
Also shown here is an AI platform 408 which can organize various aspects of the AI as particular modules and kernel functions such as the deep learning algorithms, big data access and searching, IOT connectivity, sensor controls such as smart vision, navigation mapping, and NLP”. based on the one or more candidate tasks being mapped, provide In [0253], it is stated that, "Particularly early on when building historical databases, users can review the results [when performing a search query for past actions] and select whether the sequence resulted in success, whether a particular sequence will result in success, or even which probabilities the network should display and return the proposed label". and electronically receive a selection by the user of either the first robot device or the second robot device for use in execution of the at least one of the one or more candidate tasks. In [0242], “The system can then be configured to determine the differences and similarities between robots and choose executable commands with high likelihoods of success between various robots having similar features”. In [0284], “It will be appreciated that in some embodiments, human operators can be utilized so as to provide the oversight for all cloud robot operations and override any AI controls, providing an added level of safety”. and initiate performance of the third set of tasks using the first robot device or the second robot device In [0014], “wherein, when the probabilistic determination is above a pre-determined threshold, the processor is configured to communicate one or more necessary executable commands to the robot control unit, wherein the robot control unit then causes each of the articulating joints of the robot to execute each of the necessary executable commands so as to complete the user command”. selected by the user.
In [0284], “It will be appreciated that in some embodiments, human operators can be utilized so as to provide the oversight for all cloud robot operations and override any AI controls, providing an added level of safety”. Huang does not teach: wherein the one or more candidate tasks comprise robot-agnostic tasks and the search request is absent an indication of target robot devices, target robot types, or both in association with performing the one or more candidate tasks However, Ramanujam teaches: wherein the search request is absent an indication of target robot devices, target robot types, or both in association with performing the one or more candidate tasks [0013]...Embodiments herein disclose a method and system for generating, simulating, and executing a robotic solution recipe to execute a robotic solution by a plurality of robots, according to an embodiment. In one embodiment, the system includes a software framework that allows a user to generate a reusable robotic solution recipe, simulate execution of the reusable robotic recipe, and deploy the reusable robotic solution recipe at a hardware-vendor agnostic controller that executes the reusable robotic solution recipe. [0018]...In case of simulation, the robotic solution recipe is executed by a simulator and a simulated execution of the robotic solution is displayed at a web User Interface (UI). In case of real-world execution, the hardware vendor-agnostic controller executes the robotic solution recipe and sends instruction to the hardware controllers corresponding to the selected hardware components for executing the robotic solution. The vendor-agnostic hardware controller communicates with the controllers of the selected hardware components to send instructions for executing the robotic solution recipe and receive real-time robot solution execution data that may be used to send execution instructions to the robots.
One of ordinary skill in the art would have recognized that applying the known technique of Ramanujam would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to Claim 1. Huang combined with Ramanujam does not expressly disclose the remaining limitations. However, Blonder teaches: wherein the prediction takes into account an estimated energy usage for the first robot device and the second robot device to complete the third set of tasks In [0063], “The mission engine may analyze a plurality of operational parameters of a plurality of autonomous vehicles, specifically with respect to the computed mission parameters in order to identify one or more autonomous vehicles which are determined to be capable of carrying out the inspection mission and successfully acquire the required inspection data. The operational parameters may include, for example, a type (aerial, ground, naval), terrain capability, speed, range, altitude, maneuverability, power consumption, availability, operational cost and/or the like” and an estimated amount of time taken by the first robot device and the second robot device to complete the third set of tasks In [0153], “In another example, the mission engine 230 may select the capable autonomous vehicle(s) 202 according to a fourth optimization function defining an earliest completion time of the inspection mission. For example, assuming the mission engine 230 identifies two capable autonomous vehicles 202 for carrying out the inspection mission, for example, the UAV 202A1 and a certain ground autonomous vehicle 202B. While the UAV 202A1 may complete the inspection in shorter time (duration) compared to the ground autonomous vehicle 202B, due to limited availability of the UAV 202A1 the inspection mission may be completed sooner when using the ground autonomous vehicle 202B.
The mission engine 230 may therefore select the ground autonomous vehicle 202B to conduct the inspection mission”. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to record details of vehicle configuration as taught in Blonder and apply that to the storage of environmental variables as taught in Huang combined with Ramanujam. Motivation to do so comes from the same rationale as outlined above with respect to Claim 1. Claim 21 Huang combined with Ramanujam and Blonder teaches all the limitations of Claim 1 as outlined above. Huang teaches: The method of claim 1, wherein the prediction indicates the first robot device has a greater likelihood of success than the second robot device In [0013], “wherein the processor is configured to make a probabilistic determination regarding the likelihood of successfully completing the particular user command based on the particular one or more detected environmental parameters given a particular user command by comparing a determined proposed path having a similar executable command sequence having been performed in similar environmental parameters as contained in the historical database”. Huang does not expressly disclose the remaining limitations. However, Ramanujam teaches: when the third set of tasks is to be performed a threshold number of times over a period of time, In [0035], “In one embodiment, the robotics solution logic includes different software functions for defining the robotic solution. For example, the DSL code may include a loop function that is used to define a repetitive activity that is to be executed by one or more robots during execution of the robotic solution. For example, a “while” loop function that define activities, including repetitive activities, to be performed by one or more robots when the “while” loop function condition is satisfied.
The robotics solution logics included in the DSL code may also allow a user to define a condition, for example an if-then-else condition, and actions that needs to be performed based on satisfying or non-satisfying of the condition”. We understand the threshold to be defined by the iterations where the while loop is satisfied, and the period to be the duration of “execution of the robotic solution”. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to apply the hardware-agnostic approach of Ramanujam’s platform to the system of Huang. Motivation to do so comes from the same rationale outlined above with respect to Claim 1. Huang combined with Ramanujam does not expressly disclose the remaining limitations. However, Blonder teaches: and wherein the prediction indicates the second robot device has a greater likelihood of success than the first robot device when the third set of tasks has a financial cost that exceeds a threshold value. In [0028], “In a further implementation form of the first, second and/or third aspects, the one or more capable autonomous vehicles are selected according to one or more optimization functions. The one or more optimization functions are directed to minimize one or more operational objectives of the inspection mission. The one or more operational objective are members of a group consisting of: a shortest route, a lowest operational cost, a minimal number of autonomous vehicles, a shortest mission time and/or a maximal utilization of the plurality of autonomous vehicles”. Note that in this case, success encompasses meeting the threshold cost.
In [0177], “Naturally, this feedback loop may be repeated in a plurality of iterations each to initiate an additional inspection mission until the mission engine 230 determines that the acquired inspection data is compliant and/or until one or more mission thresholds defined for the inspection mission are reached, for example, a maximum mission number, a maximum accumulated mission time, a maximum accumulated cost and/or the like”. It is implicit that exceeding a maximum cost would induce the system of Blonder to reallocate task assignment with operational cost in mind. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to record details of vehicle configuration as taught in Blonder and apply that to the storage of environmental variables as taught in Huang combined with Ramanujam. Motivation to do so comes from the same rationale as outlined above in Claim 1. Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Huang (US 2020/0171671 A1) in view of Ramanujam (US 2020/0101611 A1) in further view of Blonder (US 2022/0250658 A1) in further view of Ghorbanian-Matloob (US 2021/0048814 A1), henceforth referred to as GM. Claim 8 Huang combined with Ramanujam and Blonder teaches all the disclosed limitations of Claim 1 as outlined above. Huang does not teach: The method of claim 1, further comprising: identifying a set of waypoints common to the first mission file and the second mission file, wherein providing the first mission file and the second mission file is based on identifying the set of waypoints. However, GM teaches: The method of claim 1, further comprising: identifying a set of waypoints common to the first mission file and the second mission file, wherein providing the first mission file and the second mission file is based on identifying the set of waypoints.
GM discloses in [0142], "The memory 1020 can store data 1030 that can be obtained (e.g., received, accessed, written, manipulated, created, generated, etc.) and/or stored. The data 1030 can include, for instance, services data (e.g., trip data, route data, user data, etc.), sensor data, map data, perception data, prediction data, motion planning data, object states and/or state data, object motion trajectories, feedback data, fault data, log data, and/or other data/information as described herein. In some implementations, the computing device(s) 1005 can obtain data from one or more memories that are remote from the autonomous vehicle 105". To clarify that this is accessible from the management system, in [0144], "Computing tasks discussed herein as being performed at computing device(s) remote from the autonomous vehicle can instead be performed at the autonomous vehicle (e.g., via the vehicle computing system), or vice versa. Such configurations can be implemented without deviating from the scope of the present disclosure". We submit that this can work in combination with the recognition of "similar past environmental scenarios" and "historical actions" disclosed in [0110] of Huang, wherein the logic for "providing" is disclosed in [0253] above, with search results viewable; as this identification is performed on the basis of environmental scenarios associated with past tasks, we are merely expanding our definition of environmental variables to include waypoints as taught by GM. Huang combined with Ramanujam and Blonder discloses a system meant to allow humans to interface with a plurality of robots. Ghorbanian-Matloob et al. discloses an autonomous vehicle management platform. Each reference discloses systems directed to controlling robots. The technique of recording mission-relevant vehicle data as recorded in Ghorbanian-Matloob et al.
is directly applicable to the system of Huang combined with Ramanujam and Blonder, as we are merely expanding upon the recording of environmental variables in the database of Huang. It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to apply the technique of recording mission-relevant vehicle data as taught by Ghorbanian-Matloob et al. to the system of Huang combined with Ramanujam and Blonder. Motivation to do so comes from the fact that the claim is plainly directed to the predictable result of combining known items in the prior art, with the expected benefit that storing such data in the mission database would enable users to differentiate between comparable missions with more granularity, paying particular attention to required measurement devices to perform the task at hand. Claim 9 Huang combined with Ramanujam and Blonder teaches all the disclosed limitations of Claim 1 as outlined above. Huang combined with Ramanujam and Blonder does not explicitly teach: The method of claim 1, further comprising: identifying, based on first image data acquired in association with the first set of tasks and second image data acquired in association with the second set of tasks, one or more measurement devices common to the first set of tasks and the second set of tasks However, Ghorbanian-Matloob teaches: The method of claim 1, further comprising: identifying, based on first image data acquired in association with the first set of tasks and second image data acquired in association with the second set of tasks, one or more measurement devices common to the first set of tasks and the second set of tasks GM discloses in [0068], "The one or more system sensors 125 can be configured to generate and/or store data including the sensor data 140 associated with one or more objects that are proximate to the vehicle 105 (e.g., within range or a field of view of one or more of the one or more sensors 125).
The one or more autonomy system sensors 125 can include a Light Detection and Ranging (LIDAR) system, a Radio Detection and Ranging (RADAR) system, one or more cameras (e.g., visible spectrum cameras and/or infrared cameras), motion sensors, and/or other types of imaging capture devices and/or sensors. The sensor data 140 can include image data, RADAR data, LIDAR data, and/or other data acquired by the one or more sensors 125. The one or more objects can include, for example, pedestrians, vehicles, bicycles, and/or other objects". Huang does teach: wherein providing the first mission file and the second mission file is based on identifying the one or more measurement devices. We submit that this can work in combination with the recognition of "similar past environmental scenarios" and "historical actions" disclosed in [0110] of Huang, wherein the logic for "providing" is disclosed in [0253] above, with search results viewable; as this identification is performed on the basis of environmental scenarios associated with past tasks, we are merely expanding our definition of environmental variables to include measurement devices common to carrying out both tasks as taught by GM. One of ordinary skill in the art would have recognized that applying the known technique of Ghorbanian-Matloob et al. would have yielded predictable results and resulted in an improved system for the same reasons as stated above with respect to Claim 8. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Huang (US 2020/0171671 A1) in view of Ramanujam (US 2020/0101611 A1) in further view of Blonder (US 2022/0250658 A1) in further view of Gray (US 10,795,327 B2). Huang combined with Ramanujam and Blonder teaches all of the disclosed limitations of Claim 1, as outlined above.
Huang combined with Ramanujam and Blonder does not teach: The method of claim 1, wherein the first set of tasks, the second set of tasks, and the third set of tasks comprise process automation inspection tasks. However, Gray teaches: The method of claim 1, wherein the first set of tasks, the second set of tasks, and the third set of tasks comprise process automation inspection tasks. As Gray discloses in Col 7 ln 12-18, "The control system 114 may also include a mission planning component 126 that may autonomously or in coordination with a remote server 16 generate or update a mission plan and executes the mission plan (such as for the performance of an assigned task) by coordinating the various other components of the control system 114 and the robot 12." With reference to automation, in Gray Col 7 ln 22-29, "Alternatively, in other embodiments, the inspection routine may be largely automated and/or under control of the mission planning functionality and the operator inputs may be generally directed to overriding a planned action (such as based on simulated results as discussed herein) or to providing limited inputs to further performance of a task in a suitable or satisfactory manner". It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Huang combined with Ramanujam and Blonder with Gray’s suggestion to use robots for inspection tasks, as evidenced by Tsalatsanis. Tsalatsanis states that it is known to use robots in place of humans when processes may put humans in danger. (Tsalatsanis, p. 1, col 1, ln 8-12: "Specifically in industrial applications, such as manufacturing, underground mining, toxic waste cleanup and material storage/handling, where many processes take place in hazardous environments harmful to human health, the choice of robotics-based solutions is justifiable.")
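For illustration, the kind of optimization-function selection Blonder describes ([0063], [0153]) — weighing estimated energy use and completion time across candidate robots — might look like the following. The sketch is hypothetical: `Robot`, `pick_robot`, the weights, and the sample numbers are all invented, not taken from Blonder:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    est_energy_wh: float  # estimated energy to complete the common tasks
    est_time_min: float   # estimated completion time, availability delays included

def pick_robot(robots: list[Robot], w_energy: float = 0.5,
               w_time: float = 0.5) -> Robot:
    """Select the robot minimizing a weighted energy/time score, analogous to
    Blonder's optimization functions over operational parameters."""
    return min(robots, key=lambda r: w_energy * r.est_energy_wh
                                     + w_time * r.est_time_min)

uav = Robot("UAV", est_energy_wh=120.0, est_time_min=90.0)  # fast in flight, but costly
ugv = Robot("UGV", est_energy_wh=60.0, est_time_min=45.0)   # slower, cheaper, available
print(pick_robot([uav, ugv]).name)  # UGV
```

Blonder's [0153] availability example maps onto the `est_time_min` field: a nominally faster vehicle with limited availability carries a larger effective completion time and can lose the selection.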
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to THEODORE L XIE whose telephone number is (571)272-7102. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rutao Wu can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /THEODORE XIE/Examiner, Art Unit 3623 /CHARLES GUILIANO/Primary Examiner, Art Unit 3623

Prosecution Timeline

Dec 08, 2022
Application Filed
Apr 18, 2025
Non-Final Rejection — §103
Jul 30, 2025
Response Filed
Sep 02, 2025
Final Rejection — §103
Oct 09, 2025
Request for Continued Examination
Oct 16, 2025
Response after Non-Final Action
Jan 05, 2026
Request for Continued Examination
Feb 04, 2026
Response after Non-Final Action
Feb 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591576
DRILLING PERFORMANCE ASSISTED WITH AN ARTIFICIAL INTELLIGENCE ENGINE
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
1y 7m
Median Time to Grant
High
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
