Prosecution Insights
Last updated: April 19, 2026
Application No. 18/832,501

Workflow Construction and Monitoring Method and System

Final Rejection §103

Filed: Jul 23, 2024
Examiner: WARNER, PHILIP N
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Siemens Aktiengesellschaft
OA Round: 2 (Final)
Grant Probability: 36% (At Risk)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 65%

Examiner Intelligence

Career Allow Rate: 36% (39 granted / 107 resolved; -15.6% vs TC avg)
Interview Lift: +28.6% in resolved cases with interview
Avg Prosecution: 3y 7m (typical timeline); 28 applications currently pending
Total Applications: 135 across all art units (career history)
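For readers who want to sanity-check the derived figures above, a minimal sketch of the arithmetic follows. The formulas are editorial assumptions about how such dashboards are typically computed; only the input numbers are taken from the report.

```python
# Illustrative arithmetic only: reconciling the dashboard's derived figures from the
# counts it reports. The formulas are assumptions; the inputs come from the lines above.
granted, resolved = 39, 107

career_allow_rate = granted / resolved                      # ~0.364 -> displayed as 36%
tc_average_estimate = career_allow_rate + 0.156             # report shows -15.6% vs TC avg

with_interview_rate = 0.65                                  # "With Interview: 65%"
interview_lift = with_interview_rate - career_allow_rate    # ~0.286 -> "+28.6%"

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Implied TC average: {tc_average_estimate:.1%}")
print(f"Interview lift: {interview_lift:+.1%}")
```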

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 53.8% (+13.8% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

Deltas are measured against the Tech Center average estimate • Based on career data from 107 resolved cases
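As a quick consistency check, each statute-specific delta can be combined with its rate to back out the implied Tech Center reference value. This is an editorial illustration of the arithmetic, not a calculation the report itself states.

```python
# Illustrative only: back out the implied Tech Center average estimate from each
# statute-specific rate and its stated delta. Values are copied from the table above.
statute_stats = {"§101": (0.318, -0.082), "§103": (0.538, 0.138),
                 "§102": (0.095, -0.305), "§112": (0.049, -0.351)}

for statute, (rate, delta_vs_tc) in statute_stats.items():
    implied_tc_average = rate - delta_vs_tc    # all four back out to roughly 40%
    print(f"{statute}: examiner {rate:.1%}, implied TC average {implied_tc_average:.1%}")
```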

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following FINAL Office Action is in response to Applicant's communication filed 02/05/2026 regarding Application 18/832,501.

Status of Claim(s)

Claim(s) 1-11 is/are currently pending and are rejected as follows.

Response to Arguments – 103 Rejection

Applicant's arguments regarding the previously applied 103 rejections have been fully considered and are not deemed persuasive. Applicant argues that the previously applied art of Versace does not disclose the use of a data block, function block, or other structures as disclosed in Applicant's claims. Examiner disagrees, as Versace recites the creation and use of hardware-agnostic 'brains' that are shown in Figures 3A-B as various 'blocks' of information, including stimuli and responses. In paragraph 6, Versace discloses that stimuli and response 'blocks' can be defined, selected, edited, and saved. These are then constituted into brains that allow flows of information for decision making. The stimulus, response, and other structures in Versace are deemed equivalent to the data, function, and other blocks recited in Applicant's claims, both in view of the specification and under broadest reasonable interpretation by one of ordinary skill in the art. For these reasons, the claims remain rejected over the previously applied prior art. Further elaboration and citations are given in the amended prior art rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Versace (US 2017/0076194 A1) in view of Crawford (US 2009/0064053 A1).
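Before the claim-by-claim mapping below, a minimal, hypothetical sketch of the claimed structure may help in evaluating the equivalence the Examiner asserts between Versace's stimulus/response "brains" and the claimed function block nodes with connected data blocks. All class and field names in the sketch are editorial assumptions drawn from the claim language quoted in this Office Action; they are not the Applicant's implementation and not code disclosed by Versace or Crawford.

```python
# Hypothetical sketch for illustration only. Names and fields are editorial assumptions
# based on the claim language quoted in this Office Action, not the Applicant's code.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional

@dataclass
class DataBlock:
    """Presents corresponding data from the service operation of the node it is connected to."""
    label: str            # claim 2: a data block label indicating the type of the data block
    data: Any = None      # claim 2: a data block body with a display area for specific data

    def display(self) -> None:
        print(f"[{self.label}] {self.data}")

@dataclass
class FunctionBlockNode:
    """Behavior tree node that correspondingly implements one service operation (claim 1)."""
    name: str
    service_operation: Callable[[], Any]
    data_block: Optional[DataBlock] = None
    children: List["FunctionBlockNode"] = field(default_factory=list)

    def execute(self) -> None:
        result = self.service_operation()      # run the node's service operation
        if self.data_block is not None:        # provide execution data to the connected data block
            self.data_block.data = result
            self.data_block.display()
        for child in self.children:            # traverse the rest of the behavior tree
            child.execute()

# Minimal two-node workflow whose execution data surfaces in the connected data blocks.
root = FunctionBlockNode(
    name="pick_part",
    service_operation=lambda: {"status": "ok", "cycle_time_s": 4.2},
    data_block=DataBlock(label="datasheet"),
    children=[
        FunctionBlockNode(
            name="inspect_part",
            service_operation=lambda: "pass",
            data_block=DataBlock(label="data pair"),
        )
    ],
)
root.execute()
```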
Claim(s) 1, 10, and 11 – Versace discloses the following limitations: At least one memory storing a computer-readable code (Versace: Paragraph 42, "The platform 100 includes a user interface 102 that enables a user to define a hardware-agnostic brain, a processor 104 that implements the hardware-agnostic brain (which may include processes and programs implementing Artificial Intelligence (AI)/Artificial Neural Network (ANN)/Deep Neural Network (DNN) processing), a memory 103 to store instructions for defining and executing the hardware-agnostic brain (including instructions implementing AI/ ANN/DNN and synaptic weights defining ANN/DNN structures), and a communications interface 105 for communicating with the robot 106. The user interface 102 allows the user to create actionable tasks and/or usable workflows for the robot 106. The platform 100 interprets and implements these workflows as a hardware-agnostic brain 104 that interprets data from the robot 106 and input entered via the user interface 102, then performs one or more corresponding actions. The platform 100 can be implemented in any suitable computing device, including but not limited to a tablet computer (e.g., an iPad), a smartphone, a single-board computer, a desktop computer, a laptop, either local or in the cloud, etc. The platform 100 may provide the user interface 102 as a Graphical User Interface (GUI) via a touchscreen or other suitable display.") At least one processor to call the computer-readable code (Versace: Paragraph 42, "The platform 100 includes a user interface 102 that enables a user to define a hardware-agnostic brain, a processor 104 that implements the hardware-agnostic brain (which may include processes and programs implementing Artificial Intelligence (AI)/Artificial Neural Network (ANN)/Deep Neural Network (DNN) processing), a memory 103 to store instructions for defining and executing the hardware-agnostic brain (including instructions implementing AI/ ANN/DNN and synaptic weights defining ANN/DNN structures), and a communications interface 105 for communicating with the robot 106. The user interface 102 allows the user to create actionable tasks and/or usable workflows for the robot 106. The platform 100 interprets and implements these workflows as a hardware-agnostic brain 104 that interprets data from the robot 106 and input entered via the user interface 102, then performs one or more corresponding actions. The platform 100 can be implemented in any suitable computing device, including but not limited to a tablet computer (e.g., an iPad), a smartphone, a single-board computer, a desktop computer, a laptop, either local or in the cloud, etc. The platform 100 may provide the user interface 102 as a Graphical User Interface (GUI) via a touchscreen or other suitable display.") A node library with behavior tree nodes for constructing behavior trees and various types of data blocks, the behavior tree comprising: function block nodes, each function block node correspondingly implementing a service operation, each data block being configured to present corresponding data in the service operation of the function block node connected thereto (Versace: Paragraph 92, "FIGS. 9A and 9B show a dedicated GUI display screen 900 that provides part of the "Configure" component. It appears if the user selects the "add brain to robot" button 704 on the navigation page 700. 
The screen 900 shows several icons representing various GUI functionalities including an "add brain" button 902 and buttons associated with previously defined brains, shown in FIGS. 9A and 9B as "888" brain 904a, "AllDisplays" brain 904b, "AudioResponse Test" brain 904c, and "Button Stimulus" brain 904d (collectively, previously defined brains 904). The previously defined brains 904 can be created and stored locally by the user or accessed or downloaded via a cloud resource, such as a "brain store" or a free sharing framework. Screen 900 also includes a search input 906 that enables to the user to search for a particular brain and/or filter brains by name, robot type, or other brain characteristic. Brains can be "swapped" via a GUI on the iOS device as described below."; Paragraph 93, "Each brain (including each previously defined brain 904) may have an xml representation that can be shared across one or more devices (robots) simultaneously, sequentially, or both simultaneously and sequentially. For instance, a particular brain can be swapped among robots and/or transmitted to multiple robots via a GUI executing on a iOS device, Android device, or other suitable computing device."; Paragraph 94, "The user can apply one brain to many robots, one brain to many different types of robots, and/or many brains to one robot via screen 900 without having to know or understand the specifics of the brain commands, the robots' capabilities, or how to program the robots. If the user selects a brain that is incompatible with the selected robot, the GUI may present a message warning of the incompatibilities. For example, if the selected robot is a ground robot and the brain includes a behavior for a UA V, such as a "Fly Up" command, the system warns the user that the brain and/or its behavior(s) has one or more incompatibilities with the selected robot."; Paragraph 100, "FIGS. l lA-1 lE show a Behavior Editor 1100 that enables viewing, adding, editing, and deleting of behaviors 10 for use in creating and editing brains 40. As shown in FIG. l lA, available stimuli 20 are represented on a stimulus panel 1120 and available responses 30 are represented on a response panel 1130 displayed on either side of the behavior 10 being created/edited. The available stimuli 20 and responses 30 may be retrieved from a library stored in a local or cloud-based memory."; Paragraph 127, "The library, which may be stored in a memory or database, that handles generalizing across robotic structures has to make specific effort to abstract away the heterogeneous communication protocols. Each of these communication protocols has their own set of inherent properties. For example, UDP is connectionless and tends to be unreliable while TCP is connection-based and tends to be reliable. To abstract away these differences while maintaining a single API for all robots, helper objects are provided in the library to add some of those properties to communication protocols that don't have them inherently. For example, there is a reliable UDP stream to allow us to use communication paradigms that require reliability. 
This allows us to treat heterogeneous communication protocols as functionally similar which provides more flexibility for what algorithms can be used on robots.") generating a behavior tree corresponding to a workflow on the basis of a behavior tree construction operation entered by a user on a graphical user interface, the behavior tree comprising function block nodes, each function block node correspondingly implementing a service operation (Versace: Paragraph 42, "The platform 100 includes a user interface 102 that enables a user to define a hardware-agnostic brain, a processor 104 that implements the hardware-agnostic brain (which may include processes and programs implementing Artificial Intelligence (AI)/Artificial Neural Network (ANN)/Deep Neural Network (DNN) processing), a memory 103 to store instructions for defining and executing the hardware-agnostic brain (including instructions implementing AI/ ANN/DNN and synaptic weights defining ANN/DNN structures), and a communications interface 105 for communicating with the robot 106. The user interface 102 allows the user to create actionable tasks and/or usable workflows for the robot 106. The platform 100 interprets and implements these workflows as a hardware-agnostic brain 104 that interprets data from the robot 106 and input entered via the user interface 102, then performs one or more corresponding actions. The platform 100 can be implemented in any suitable computing device, including but not limited to a tablet computer (e.g., an iPad), a smartphone, a single-board computer, a desktop computer, a laptop, either local or in the cloud, etc. The platform 100 may provide the user interface 102 as a Graphical User Interface (GUI) via a touchscreen or other suitable display."; Paragraph 54, "FIGS. 3A-6 illustrate fundamental building blocks for defining hardware-agnostic robot brains. As described in greater detail below, each brain comprises one or more behaviors. Each behavior, in tum, comprises one or more stimuli and one or more responses. The stimuli may represent changes in environmental parameters that the robot can detect with on-board or networked sensors; the status of the robot's internal and external components; and commands and signals received from other sources. The responses represent actions that the robot can take. Together, the stimuli and actions that make up a particular behavior specify how the robot works and functions."; Paragraph 102, "Stimuli can be linked by AND/OR logical conditions. Types of stimuli include but are not limited to: user input, such as touchscreen swipes, tilts, button pushes, etc.; machine vision (e.g., OpenCV), AI/ANN/DNN-related input (e.g., color, motion, face, object, and/or scene detection, robot-generated map); and quantitative sensor readings as well as device status from robot or controlling device, e.g. an iPad (e.g., WiFi signal strength and time of day). In some implementations there may be sub-dialogs for settings ( e.g., at what battery level should a stimulus be activated). The setting may be displayed without the need to open the sub-dialog, or the user may open the sub-dialog for editing. Machine vision stimuli may include selection of particular colors the robot can detect to generate a response. 
Other implementations can include objects, people, scenes, either stored in the knowledge base of the robot, objects the user has trained the brain to recognize, objects that have been trained by other users, object learned by other robots, or knowledge bases available in cloud resources."; Paragraph 121, "FIG. 14 shows a process layout for constructing a generalized API 70 suitable for interfacing between a robot or robot-specific API and a hardware-agnostic robot brain. This generalized API 70 is constructed in four process layers. In some embodiments, a single block is taken from Layer 1 of the process layout of API 70 that represents choosing a specific robot 72. In some embodiments, one or more blocks can be taken from Layer 2, as this is the step that configures hardware capabilities 74 of the chosen robot 72. Layer 3 is determined by the robot's movement capabilities 76, such as for example whether the robot 72 is a ground based robot, a UAV or a UUV. The final process step for Layer 4 is added for all robots as general commands 78, regardless of the selections and/or combinations of the previous process layers.") adding and connecting a data block to each function block node in the behavior tree, each data block configured to present a corresponding data in the service operation of the function block connected thereto (Versace: Paragraph 42, "The platform 100 includes a user interface 102 that enables a user to define a hardware-agnostic brain, a processor 104 that implements the hardware-agnostic brain (which may include processes and programs implementing Artificial Intelligence (AI)/Artificial Neural Network (ANN)/Deep Neural Network (DNN) processing), a memory 103 to store instructions for defining and executing the hardware-agnostic brain (including instructions implementing AI/ ANN/DNN and synaptic weights defining ANN/DNN structures), and a communications interface 105 for communicating with the robot 106. The user interface 102 allows the user to create actionable tasks and/or usable workflows for the robot 106. The platform 100 interprets and implements these workflows as a hardware-agnostic brain 104 that interprets data from the robot 106 and input entered via the user interface 102, then performs one or more corresponding actions. The platform 100 can be implemented in any suitable computing device, including but not limited to a tablet computer (e.g., an iPad), a smartphone, a single-board computer, a desktop computer, a laptop, either local or in the cloud, etc. The platform 100 may provide the user interface 102 as a Graphical User Interface (GUI) via a touchscreen or other suitable display."; Paragraph 54, "FIGS. 3A-6 illustrate fundamental building blocks for defining hardware-agnostic robot brains. As described in greater detail below, each brain comprises one or more behaviors. Each behavior, in tum, comprises one or more stimuli and one or more responses. The stimuli may represent changes in environmental parameters that the robot can detect with on-board or networked sensors; the status of the robot's internal and external components; and commands and signals received from other sources. The responses represent actions that the robot can take. Together, the stimuli and actions that make up a particular behavior specify how the robot works and functions."; Paragraph 102, "Stimuli can be linked by AND/OR logical conditions. 
Types of stimuli include but are not limited to: user input, such as touchscreen swipes, tilts, button pushes, etc.; machine vision (e.g., OpenCV), AI/ANN/DNN-related input (e.g., color, motion, face, object, and/or scene detection, robot-generated map); and quantitative sensor readings as well as device status from robot or controlling device, e.g. an iPad (e.g., WiFi signal strength and time of day). In some implementations there may be sub-dialogs for settings (e.g., at what battery level should a stimulus be activated). The setting may be displayed without the need to open the sub-dialog, or the user may open the sub-dialog for editing. Machine vision stimuli may include selection of particular colors the robot can detect to generate a response. Other implementations can include objects, people, scenes, either stored in the knowledge base of the robot, objects the user has trained the brain to recognize, objects that have been trained by other users, objects learned by other robots, or knowledge bases available in cloud resources."; Paragraph 121, "FIG. 14 shows a process layout for constructing a generalized API 70 suitable for interfacing between a robot or robot-specific API and a hardware-agnostic robot brain. This generalized API 70 is constructed in four process layers. In some embodiments, a single block is taken from Layer 1 of the process layout of API 70 that represents choosing a specific robot 72. In some embodiments, one or more blocks can be taken from Layer 2, as this is the step that configures hardware capabilities 74 of the chosen robot 72. Layer 3 is determined by the robot's movement capabilities 76, such as for example whether the robot 72 is a ground based robot, a UAV or a UUV. The final process step for Layer 4 is added for all robots as general commands 78, regardless of the selections and/or combinations of the previous process layers.") Examiner interprets the stimuli, responses, and final combined blocks to be equivalent to the data and function block nodes both under broadest reasonable interpretation and in view of Applicant's specification. Versace does not explicitly disclose the following; however, in the analogous art of workflow creation, Crawford discloses the limitations below: parsing the behavior tree connected with data block, deploying the workflow corresponding to the behavior tree to a runtime of a corresponding workcell so that the runtime executes the workflow to control each resource in the workcell to execute the service operation according to the workflow, and providing corresponding data obtained in an execution process of the service operation to the corresponding data block for display (Crawford: Paragraph 19, "In some implementations, the data can correspond to frequency information. For example, the frequency information can represent the most executed paths. The frequency information can also represent the least executed paths. In some implementations, the dataset can be stored in at least one file or database. In some other implementations, the data can be generated, for example, as a result of a simulation."; Paragraph 83, "Compute the leveled DAG that represents the same decision logic as the given DAG, but which uses a specific leveling requested by the user;"; Paragraph 90, "A user can also select a subset of nodes from a decision tree based on frequency information. This information can be retrieved from a data store, such as a log file or a database. 
For example, the frequency information can indicate that one path of the decision tree is leading to a specific action more frequently than any other path. In one variation, only that portion of the tree that leads to a specific action more frequently than any other portion can be visualized."; Paragraph 142, "In one variation, a software component responsible for generating action graphs can parse the entire decision tree to calculate the nodes that should be included in the action graph. For example, the software component can include a module configured for selecting a subset of paths that lead to a particular node within the decision tree. In one variation, this module can select the nodes by parsing the entire tree, starting from every available root node and navigating through all possible paths within the tree. That software module can maintain a list of all paths. In one variation, the software module responsible for paths selection can iterate through all paths and, from the collection of these paths, select only those that lead to the selected node. In this variation, the parsing of the tree can be referred to as "top-down.""; Paragraph 155, "The visualization approaches all serve to make the paths of interest more visually noticeable and easy to follow than the rest of the paths as shown in FIG. 15. In some variations, one can choose to make the least executed paths be highlighted in this same way. With such an approach, the user could possibly identify those pieces of decision logic that were implemented incorrectly or inefficiently (since they rarely get executed) and, in doing so, possibly make edits so as to make the resulting decision logic more efficient.") Versace discloses a method for building decision trees for workflow execution. Crawford discloses a method for visualizing, analyzing, and generating workflows. At the time of Applicant's filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Versace with the teachings of Crawford in order to improve the clarity and ease of analyzing workflows as disclosed by Crawford (Crawford: Paragraph 134, "In other words, the user can just examine the logic corresponding to a subset of the population that is assigned a particular action, and while doing so, can ignore all the members of the population that are assigned other actions.") Claim(s) 2 – Versace and Crawford disclose the limitations of claims 1 Versace further discloses the following: a data block label for indicating a type of the data block (Versace: Paragraph 102, "Stimuli can be linked by AND/OR logical conditions. Types of stimuli include but are not limited to: user input, such as touchscreen swipes, tilts, button pushes, etc.; machine vision (e.g., OpenCV), AI/ANN/DNN-related input (e.g., color, motion, face, object, and/or scene detection, robot-generated map); and quantitative sensor readings as well as device status from robot or controlling device, e.g. an iPad (e.g., WiFi signal strength and time of day). In some implementations there may be sub-dialogs for settings ( e.g., at what battery level should a stimulus be activated). The setting may be displayed without the need to open the sub-dialog, or the user may open the sub-dialog for editing. Machine vision stimuli may include selection of particular colors the robot can detect to generate a response. 
Other implementations can include objects, people, scenes, either stored in the knowledge base of the robot, objects the user has trained the brain to recognize, objects that have been trained by other users, object learned by other robots, or knowledge bases available in cloud resources."; Paragraph 104, "As shown in FIGS. l lA-1 lE, responses 30 are depicted as triangles arranged sequentially (in a line) and selectable from the sliding panel 1130 on the right. Once conditions to satisfy a stimulus ( or multiple stimuli, e.g., three stimuli arranged in AND statements) are met, one or more responses are triggered. These responses can be executed sequentially, and while being executed, other stimuli processing can be prevented from gaining access to other stimuli using a scheduler as explained below with respect to FIGS. ISA and 15B. In other implementations, the sequence of responses can be broken by intervening stimuli. Responses are converted from robot-agnostic to robot-specific in software as explained below with respect to FIG. 14. For example, a "Move forward for 2 meters" on a ground robot, and the same command on a drone, will result in two very different set of motor commands for a robot, which will need to be handled in software to achieve equivalent behavioral results.") a data block body with a display area for presenting specific data (Versace: Paragraph 105, "Responses 10 can include changing the status of the display of the robot (when available), specific movement of the robot, sounds (e.g., speech), tilt/rotations of the robot, picture/video, turning on/off lights (e.g., LED), pausing the robot, drone-specific operations (e.g., take off). In this example, available responses include display 30a)(e.g., if the robot has a screen, it can be a picture/video/image on the screen, color, text, etc.), light 30b (e.g., turn on a light­emitting diode (LED)), move 30c (e.g., trigger a walking or rolling motor), sound 30d (e.g., record sound with a microphone or emit sound with a speaker), tilt 30e (e.g., with an appropriate tilt actuator), drone 30f (e.g., fly in a certain direction, speed, or altitude), camera 30g (e.g., acquire still or video images with an image sensor), and pause 30h (e.g., stop moving). Additionally, custom actions can be available from the cloud, an on-line store, or other users."; Paragraph 107, "Stimuli 20 and responses 30 can be re-arranged by dragging and dropping in the interface 1100, and a specific response can formed by the user recording specific movement by the robot performed under the control of the user, and saved as custom movements. For example, in FIGS. l lA and l lB, the user selects and adds the location stimulus 20a and the vision stimulus 20d to the behavior 10 and possibly adjust the parameters of these stimuli 20 using the stimuli/response editor 1140. In FIG. l lC, the user adds a Move Response 30c to the behavior 10 and selects the direction and duration of the movement using the stimuli/response editor 1140. FIG. l lD shows that the user has added the image acquisition response 30g and the Drone Response 30f, which enables the use to select take off or land using the stimuli/response editor 1140. And FIG. l lE shows a Save button 1150 that enables the user to name and save the behavior 10, which can then be used in a brain 40, exported, exchanged, and posted on a cloud server store."; Paragraph 110, "In general, the interface 1200 may enable use of a dial format and/or swipe mode on a single screen. 
For instance, dials may provide indications of possible robot actions and/or easily recognizable symbols or icons ( e.g., in addition to or instead of text). The user interface may give the user the ability to playback a behavior via button press, to show and/or hide a heads-up display (HUD), and/or to customize a HUD. In some implementations, supported controls may include but are not limited to: two-dial control; swipe control; two-dial control and swipe control on the same screen; tilt control (e.g., using the iPad sensors, move the robot in the direction of a device tilt); and voice commands. For swipe control, the robot may move in the direction of the swipe and may continue moving until the user lifts his or her swiping finger. The interface may enable the user to create a pattern, by swiping, for the robot to follow. (In some implementations the interface may show a trail on the screen in the direction of the swipe.) Similarly, vertical flying control altitude may utilize two finger gestures. Similarly, voice commands may encompass a plurality of actions. Other commands may include: device-type commands ( e.g., forward, stop, right, left, faster), pet-related commands (e.g., come, heel), and other commands (e.g., wag, to move the iPhone in a Romotive Romo back and forth or to roll an Orbotix Sphero back and forth).") Claim(s) 3 – Versace in view of Crawford disclose the limitations of claims 1-2 Versace does not explicitly disclose the following, however, in analogous art of generating workflows Crawford teaches the limitations below: wherein the size of the display area is adjustable (Crawford: Paragraph 91, "In some implementations, a software component can display the control within a graphical user interface to a user. For example, the control can be implemented as a Windows graphical user interface ("GUI") control designed to display decision tree structures."; Paragraph 92, "In some variations this control can provide a set of application programming interfaces (APis) allowing software developers to modify various attributes of this control. Some attributes can include parameters such as background color, font type, font size, etc. The control can further be embedded in an application. The control can retrieve data necessary for displaying of a tree from a database and/or from a text file."; Paragraph 93, "A decision tree structure, in one variation, can be implemented as a plurality of graphical user interface elements corresponding to linked nodes within a hierarchical structure. Each node can have its own collection of properties controlling its size, shape, color and appearance of the text within nodes (i.e. font, size, alignment, etc.).") Versace discloses a method for building decision trees for workflow execution. Crawford discloses a method for visualizing, analyzing, and generating workflows. 
At the time of Applicant's filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Versace with the teachings of Crawford in order to improve the clarity and ease of analyzing workflows as disclosed by Crawford (Crawford: Paragraph 134, "In other words, the user can just examine the logic corresponding to a subset of the population that is assigned a particular action, and while doing so, can ignore all the members of the population that are assigned other actions.") Claim(s) 4 – Versace in view of Crawford disclose the limitations of claims 1-2 Versace further discloses the following: wherein the type of the data block comprises some or all of data pair, datasheet, image, video, and chart (Versace: Paragraph 62, "FIG. SB shows an example of how behaviors (neurons) in a brain can be connected to each other or "chained together." In this case, a first behavior 51 Oa produces a particular output ( e.g., motion in a particular direction or to particular GPS coordinates) that serves as a stimulus 520a for a second behavior 510b. If the second behavior's other stimulus 520b (e.g., recognition of a particular object in imagery acquired by a camera) is present, then the second behavior 510b produces its own response 530, which in turn may stimulate another behavior. Thus, triggering of the first behavior 51 0a is a stimulus for a second behavior Slob."; Paragraph 135, "Additionally, the robotic brain may be configured with an arbitrary number of behaviors (e.g., pairs of stimulus/response sets 160). Behaviors can be created and edited by the user based on stimuli/responses defined above ( e.g., stimuli directly based on reading and preprocessing of robot sensors). They can also be chosen from a collection of stimuli/responses directly generated by machine vision ( e.g., Open CV) AI/ANN/DNN algorithms in the sensory objects module 110 and navigation modules 130. For example, a particular behavior can be defined to include predetermined stimuli, such as time of the day (e.g., it's 2:00 PM as determined by the Robot processor, or the controlling cell phone), or a stimulus learned by the sensory system 110 (e.g. what "John" looks like). Similarly, a response associated with the behavior and executed in response to the stimulus can be defined from the navigation system 130 as "go to the kitchen." The resulting behavior would cause the robot to go to the kitchen (as learned by the navigation system) when the robot sees John, as learned by elaborating video/audio sensory information and/or other signals (e.g., wireless signals originating from John's phone).") Claim(s) 5 – Versace in view of Crawford disclose the limitations of claim 1 Versace further discloses the following: the workflow is an OT domain workflow (Versace: Paragraph 130, "The brains (collection of behaviors) described herein can be combined and associated with other forms of autonomous behaviors, such as autonomous sensory object recognition (such as but not limited to audition, vision, radio signals, LIDAR, or other point-cloud input, as well as any combination of the above sensors), in at least the following ways."; Paragraph 135, "Additionally, the robotic brain may be configured with an arbitrary number of behaviors (e.g., pairs of stimulus/response sets 160). Behaviors can be created and edited by the user based on stimuli/responses defined above ( e.g., stimuli directly based on reading and preprocessing of robot sensors). 
They can also be chosen from a collection of stimuli/responses directly generated by machine vision ( e.g., Open CV) AI/ANN/DNN algorithms in the sensory objects module 110 and navigation modules 130. For example, a particular behavior can be defined to include predetermined stimuli, such as time of the day (e.g., it's 2:00 PM as determined by the Robot processor, or the controlling cell phone), or a stimulus learned by the sensory system 110 (e.g. what "John" looks like). Similarly, a response associated with the behavior and executed in response to the stimulus can be defined from the navigation system 130 as "go to the kitchen." The resulting behavior would cause the robot to go to the kitchen (as learned by the navigation system) when the robot sees John, as learned by elaborating video/audio sensory information and/or other signals (e.g., wireless signals originating from John's phone)."; Paragraph 137, "In the example instantiation in FIG. ISA, at least two autonomy modules 110, 120 and several user-defined behaviors 160 can control robotic effectors via the scheduler 140. For example, the sensory module 110 could command the robot to make a camera movement to learn more about an object visual appearance with a right movement of the robot, the navigation system 130 may command the robot to explore the environment with a left movement of the robot, and the behavior 160 may command the robot to go backward following the appearance of a soccer ball. As an example instantiation, the scheduler can use a neural-like competitive cueing network ( or ANN, or DNN) to appropriately sequence actions based on their relative importance and timing.") the workcell is a workcell in an OT domain (Versace: Paragraph 130, "The brains (collection of behaviors) described herein can be combined and associated with other forms of autonomous behaviors, such as autonomous sensory object recognition (such as but not limited to audition, vision, radio signals, LIDAR, or other point-cloud input, as well as any combination of the above sensors), in at least the following ways."; Paragraph 135, "Additionally, the robotic brain may be configured with an arbitrary number of behaviors (e.g., pairs of stimulus/response sets 160). Behaviors can be created and edited by the user based on stimuli/responses defined above ( e.g., stimuli directly based on reading and preprocessing of robot sensors). They can also be chosen from a collection of stimuli/responses directly generated by machine vision ( e.g., Open CV) AI/ANN/DNN algorithms in the sensory objects module 110 and navigation modules 130. For example, a particular behavior can be defined to include predetermined stimuli, such as time of the day (e.g., it's 2:00 PM as determined by the Robot processor, or the controlling cell phone), or a stimulus learned by the sensory system 110 (e.g. what "John" looks like). Similarly, a response associated with the behavior and executed in response to the stimulus can be defined from the navigation system 130 as "go to the kitchen." The resulting behavior would cause the robot to go to the kitchen (as learned by the navigation system) when the robot sees John, as learned by elaborating video/audio sensory information and/or other signals (e.g., wireless signals originating from John's phone)."; Paragraph 137, "In the example instantiation in FIG. ISA, at least two autonomy modules 110, 120 and several user-defined behaviors 160 can control robotic effectors via the scheduler 140. 
For example, the sensory module 110 could command the robot to make a camera movement to learn more about an object visual appearance with a right movement of the robot, the navigation system 130 may command the robot to explore the environment with a left movement of the robot, and the behavior 160 may command the robot to go backward following the appearance of a soccer ball. As an example instantiation, the scheduler can use a neural-like competitive cueing network ( or ANN, or DNN) to appropriately sequence actions based on their relative importance and timing.") the resource is an OT resource (Versace: Paragraph 130, "The brains (collection of behaviors) described herein can be combined and associated with other forms of autonomous behaviors, such as autonomous sensory object recognition (such as but not limited to audition, vision, radio signals, LIDAR, or other point-cloud input, as well as any combination of the above sensors), in at least the following ways."; Paragraph 135, "Additionally, the robotic brain may be configured with an arbitrary number of behaviors (e.g., pairs of stimulus/response sets 160). Behaviors can be created and edited by the user based on stimuli/responses defined above ( e.g., stimuli directly based on reading and preprocessing of robot sensors). They can also be chosen from a collection of stimuli/responses directly generated by machine vision (e.g., OpenCV) AI/ANN/DNN algorithms in the sensory objects module 110 and navigation modules 130. For example, a particular behavior can be defined to include predetermined stimuli, such as time of the day (e.g., it's 2:00 PM as determined by the Robot processor, or the controlling cell phone), or a stimulus learned by the sensory system 110 ( e.g. what "John" looks like). Similarly, a response associated with the behavior and executed in response to the stimulus can be defined from the navigation system 130 as "go to the kitchen." The resulting behavior would cause the robot to go to the kitchen (as learned by the navigation system) when the robot sees John, as learned by elaborating video/audio sensory information and/or other signals (e.g., wireless signals originating from John's phone)."; Paragraph 137, "In the example instantiation in FIG. ISA, at least two autonomy modules 110, 120 and several user-defined behaviors 160 can control robotic effectors via the scheduler 140. For example, the sensory module 110 could command the robot to make a camera movement to learn more about an object visual appearance with a right movement of the robot, the navigation system 130 may command the robot to explore the environment with a left movement of the robot, and the behavior 160 may command the robot to go backward following the appearance of a soccer ball. As an example instantiation, the scheduler can use a neural-like competitive cueing network ( or ANN, or DNN) to appropriately sequence actions based on their relative importance and timing.") Claim(s) 6 – Versace in view of Crawford disclose the limitations of claims 1 and 5 Versace further discloses the following: generating a microservice on the basis of the behavior tree, so that an IT device triggers a runtime of the workcell to execute the OT domain workflow by calling the microservice (Versace: Paragraph 131, "FIG. ISA depicts one example application where a (real or virtual) robot 100 is controlled by at least two autonomy modules (a sensory object module 110 and a motivation module 120) and several user-defined behaviors 160. 
The robot implements the autonomous, user-defined behaviors using various on-board sensors (e.g., cameras, accelerometer, gyro, IR, etc) that form a robot sensory system I 00 and actuators/effectors (e.g., motors in tracks/propellers). The robot may also be linked to other sensors on associated hardware (e.g., a cell phone mounted on the robot, or a controlling iPad) that provide sensory input to an artificial robotic brain executing machine vision ( e.g., Open CV), AI, ANN, and DNN algorithms. These algorithms may be executed by:"; Paragraph 135, "Additionally, the robotic brain may be configured with an arbitrary number of behaviors (e.g., pairs of stimulus/response sets 160). Behaviors can be created and edited by the user based on stimuli/responses defined above ( e.g., stimuli directly based on reading and preprocessing of robot sensors). They can also be chosen from a collection of stimuli/responses directly generated by machine vision ( e.g., Open CV) AI/ANN/DNN algorithms in the sensory objects module 110 and navigation modules 130. For example, a particular behavior can be defined to include predetermined stimuli, such as time of the day (e.g., it's 2:00 PM as determined by the Robot processor, or the controlling cell phone), or a stimulus learned by the sensory system 110 (e.g. what "John" looks like). Similarly, a response associated with the behavior and executed in response to the stimulus can be defined from the navigation system 130 as "go to the kitchen." The resulting behavior would cause the robot to go to the kitchen (as learned by the navigation system) when the robot sees John, as learned by elaborating video/audio sensory information and/or other signals (e.g., wireless signals originating from John's phone)."; Paragraph 138, "FIG. 15B is a flowchart that illustrates an example scheduling process executed by the scheduler 140. The scheduler 140 starts by sorting its inputs and then computing the relative weight of each input. Inputs can come from a variety of sources, including on-board sensors 100, off-board sensors, and user inputs, and the scheduler can scale from having one input to many. Generally speaking, the input sources include modules currently executing algorithms ( e.g., the navigation module 130), the motivation of the robot (motivation module 120), command packets coming from a controller, and the currently executing brain (behaviors 160). Inputs with the highest weight execute, while inputs with lower weights that do not conflict with other inputs execute if they pass through a series of checks."; Paragraph 142, "Beyond the basic series of weights, the scheduler 140 also executes one or more sorting steps. The first step involves sorting commands that use discrete hardware resources from commands that affect things like settings and parameter adjustment ( operation 854). Settings changes are parsed and checked for conflict ( operation 856). If there are no conflicts, then all settings changes push (operation 858). If there are conflicts and there are weights that can be used to break the conflict, they are used. If everything is weighted identically and two settings conflict, than neither executes or a symmetry­breaking procedure may be applied (e.g., most-used behavior wins). Many of these settings packets can be executed simultaneously. Next, the packets that affect discrete system resources are further sorted based on the affected resource(s) ( operation 860). 
Commands that can inherently affect each other but don't necessarily do so are kept together. For example, audio playback and audio recording may be kept in the same stream, as certain devices cannot record and playback and even if the option is available there are still constraints to deal with to avoid feedback."; Paragraph 146, "The scheduling process of FIG. 15B allows the robot to look for an object in the environment, then step backward as required by the user-defined behavior, then go on exploring the environment. In order to determine the relative importance of actions, the scheduler 140 may use the graphical placement of behaviors in the brain to determine the relative importance of each behavior in the brain. In other implementations, a user may be able to provide positive and/or negative reinforcement (e.g., during a training process with the robot) in order to train the robot to develop an understanding of which behaviors and/or responses to prioritize over others. In another implementation, an ANN/DNN autonomously prioritizes scheduling based on learning and experience. In another implementation, the user may manually define the importance of each behavior, e.g., determining which behavior gets the precedence over other behaviors when both behaviors comprise stimuli which would activate their two different sets of responses in reaction a single event. For example, when an image contains two stimuli ( e.g., a face and the color red) which simultaneously activate two sets of responses, the user may manually pre­determine when behavior 1 is engaged and when behavior 2 may be performed if behavior 1 is not complete (e.g., the user may indicate that behavior 2 may interrupt behavior 1, may start after behavior 1 completes, and/or the like).") Claim(s) 7 – Versace in view of Crawford disclose the limitations of claims 1 and 5-6 Versace does not explicitly disclose the following, however, in analogous art of generating workflows Crawford teaches the limitations below: providing the corresponding data in the executing process of the service operation to the corresponding data block for display directly, or, providing the corresponding data obtained in the process of the service operation to the corresponding data block for display through the microservice (Crawford: Paragraph 115, "To simplify the analysis of the logic leading to a particular node, it can be beneficial to highlight only a portion of the decision tree or graph. In the EDAG 300 illustrated in FIG. 3, a user can highlight parent nodes leading to the action node "Monitor Report" 330. This can be accomplished by selecting the node 330, choosing a visualization type and requesting that the software application perform the visualization action. The result of this request can be EDAG 400 illustrated in FIG. 4. Note that for the purposes of visualizing, a user can select any node, not only the action-type node as demonstrated in this example. In some implementations, the user can highlight:"; Paragraph 129, "In the example of the graph 600 in FIG. 6, a scenario is illustrated in which the user has selected the rightmost action node Monitor Queue 620 and has chosen to highlight ancestor nodes of the currently selected node 620 by visibility. 
In this arrangement, the nodes that are not ancestors of the currently selected node are not displayed at all, and the structure that is displayed is reoriented in a more visually appealing form, causing it to stand out in sharp focus."; Paragraph 134, "A user can also generate "action graphs." An action graph represents a set of nodes connected by links that visually describe the population subset that is assigned a particular action by the decision logic. An action graph allows the analyst to see and understand the conditions for assigning one of the decision logic's actions in isolation from the conditions for assigning all the other decision logic's actions. Often the decision logic is very complex, as a result, it can be helpful to focus on only a portion of that logic. Action graphs can be used to subdivide the logic according to the action assigned. In other words, the user can just examine the logic corresponding to a subset of the population that is assigned a particular action, and while doing so, can ignore all the members of the population that are assigned other actions."; Paragraph 156, "FIG. 16 provides one example of a user interface that can be used for visualization of the same decision logic in alternative forms: as a tree, DAG, EDAG, or set of action graphs. Specifically, FIG. 16 displays the user interface 1600 providing a leveled action graph 1617 using the zebra display. The user interface 1600 also provides a set of buttons allowing the user to switch between various logically equivalent forms of the decision logic representation. For example, the button 1610 can be used to display the decision logic in an unleveled tree form. The button 1620 can be used to display the decision logic in the leveled tree form. The button 1630 can be used to display the decision logic in the DAG form. The button 1640 can be used to display the decision logic in the EDAG form. The button 1650 can be used to display a part of the decision logic in the action graph form."; Paragraph 158, "FIG. 17 illustrates one example of a zebra interface displaying the leveled DAG structure 1700. The leveled DAG structure 1700 has the start node 1710, connected through a plurality of intermediary nodes to the three action nodes 1730. The plurality of the intermediary nodes are displayed using the zebra display 1720. The zebra display 1720 simplifies the visualization of the DAG structure 1700 by displaying an identical condition variable at each level of the tree. Because the condition variable in every node of a level is the same, one can use the condition variable to label the whole level rather than each node. As illustrated in FIG. 17, the variables can be displayed in a single row. This allows the nodes to be rendered more compactly by labeling them with just the range of the condition.") Versace discloses a method for building decision trees for workflow execution. Crawford discloses a method for visualizing, analyzing, and generating workflows. 
At the time of Applicant's filed invention, one of ordinary skill in the art would have deemed it obvious to combine the methods of Versace with the teachings of Crawford in order to improve the clarity and ease of analyzing workflows as disclosed by Crawford (Crawford: Paragraph 134, "In other words, the user can just examine the logic corresponding to a subset of the population that is assigned a particular action, and while doing so, can ignore all the members of the population that are assigned other actions.") Claim(s) 8 – Versace in view of Crawford disclose the limitations of claims 1 and 5-6 Versace further discloses the following: registering the microservice on a knowledge middle platform, so that the IT device triggers the runtime of the workcell to execute the OT domain workflow by calling the microservice through the knowledge middle platform (Versace: Paragraph 92, "FIGS. 9A and 9B show a dedicated GUI display screen 900 that provides part of the "Configure" component. It appears if the user selects the "add brain to robot" button 704 on the navigation page 700. The screen 900 shows several icons representing various GUI functionalities including an "add brain" button 902 and buttons associated with previously defined brains, shown in FIGS. 9A and 9B as "888" brain 904a, "AllDisplays" brain 904b, "AudioResponse Test" brain 904c, and "Button Stimulus" brain 904d (collectively, previously defined brains 904). The previously defined brains 904 can be created and stored locally by the user or accessed or downloaded via a cloud resource, such as a "brain store" or a free sharing framework. Screen 900 also includes a search input 906 that enables to the user to search for a particular brain and/or filter brains by name, robot type, or other brain characteristic. Brains can be "swapped" via a GUI on the iOS device as described below."; Paragraph 93, "Each brain (including each previously defined brain 904) may have an xml representation that can be shared across one or more devices (robots) simultaneously, sequentially, or both simultaneously and sequentially. For instance, a particular brain can be swapped among robots and/or transmitted to multiple robots via a GUI executing on a iOS device, Android device, or other suitable computing device."; Paragraph 94, "The user can apply one brain to many robots, one brain to many different types of robots, and/or many brains to one robot via screen 900 without having to know or understand the specifics of the brain commands, the robots' capabilities, or how to program the robots. If the user selects a brain that is incompatible with the selected robot, the GUI may present a message warning of the incompatibilities. For example, if the selected robot is a ground robot and the brain includes a behavior for a UAV, such as a "Fly Up" command, the system warns the user that the brain and/or its behavior(s) has one or more incompatibilities with the selected robot."; Paragraph 118, "FIG. 13 shows an example robot knowledge center. In this example, the knowledge of the robot is divided in people, objects, and places. In this particular example, people and objects views are populated automatically, e.g., by the sensory module 110 (people, objects) and navigation module 130 (places) in the system of FIG. ISA, which is described in greater detail below. The user can select, e.g., via a touch screen on a tablet or a mouse, a specific view of a person or object, and provide a verbal or iconic label. 
Additionally, the user can take multiple views of an object, person, location (map), or a scene (e.g., multiple views of the kitchen) and group them in a single entity that combines all those views (e.g., several views of "John", or a cup, or "John's kitchen"), e.g. via a drag/drop interface. The user can also edit the map generated by module 130 (places), by providing verbal and iconic (e.g., color) labels to specific areas of the environment mapped by the robot. These verbally or iconically defined objects, people, and places can be used by the stimulus/response system.") Claim(s) 9 – Versace in view of Crawford disclose the limitations of claims 1, 5-6, and 8. Versace further discloses the following: providing the corresponding data obtained in the execution processes of the service operation to the knowledge middle platform and (Versace: Paragraph 137, "In the example instantiation in FIG. 15A, at least two autonomy modules 110, 120 and several user-defined behaviors 160 can control robotic effectors via the scheduler 140. For example, the sensory module 110 could command the robot to make a camera movement to learn more about an object visual appearance with a right movement of the robot, the navigation system 130 may command the robot to explore the environment with a left movement of the robot, and the behavior 160 may command the robot to go backward following the appearance of a soccer ball. As an example instantiation, the scheduler can use a neural-like competitive cueing network (or ANN, or DNN) to appropriately sequence actions based on their relative importance and timing."; Paragraph 139, "Each source coming into the scheduler 140 has more than one associated weight that gets combined into a final weight used by the scheduler 140. Each packet received by the scheduler 140 may have a specific weight for its individual command and a global weight provided by the scheduler 140 for that specific input. For example, if the scheduler 140 receives two motor commands from a controller-a first motor command with a global system weight of 0.2 and a specific weight of 0.4 and a second motor command with a global system weight of 0.1 and a specific weight of 0.9-it executes the second motor command as the combined weight of the second motor command is greater than that of the first motor command.") performing, by the knowledge middle platform, processing comprising filtering on the data, and then providing the data block for display directly or through the microservice (Versace: Paragraph 142, "Beyond the basic series of weights, the scheduler 140 also executes one or more sorting steps. The first step involves sorting commands that use discrete hardware resources from commands that affect things like settings and parameter adjustment (operation 854). Settings changes are parsed and checked for conflict (operation 856). If there are no conflicts, then all settings changes push (operation 858). If there are conflicts and there are weights that can be used to break the conflict, they are used. If everything is weighted identically and two settings conflict, then neither executes or a symmetry-breaking procedure may be applied (e.g., most-used behavior wins). Many of these settings packets can be executed simultaneously. Next, the packets that affect discrete system resources are further sorted based on the affected resource(s) (operation 860). Commands that can inherently affect each other but don't necessarily do so are kept together. 
For example, audio playback and audio recording may be kept in the same stream, as certain devices cannot record and playback and even if the option is available there are still constraints to deal with to avoid feedback."; Paragraph 145, "Different input sources can also communicate to each other and adjust the weights of the other subsystems. For instance, if the motivation system 120 is really interested in navigating, but it wants to navigate in a different direction, it can adjust the weights of the navigation packets being sent into the scheduler 140 by signaling the navigation system 130."; Paragraph 146, "The scheduling process of FIG. 15B allows the robot to look for an object in the environment, then step backward as required by the user-defined behavior, then go on exploring the environment. In order to determine the relative importance of actions, the scheduler 140 may use the graphical placement of behaviors in the brain to determine the relative importance of each behavior in the brain. In other implementations, a user may be able to provide positive and/or negative reinforcement (e.g., during a training process with the robot) in order to train the robot to develop an understanding of which behaviors and/or responses to prioritize over others. In another implementation, an ANN/DNN autonomously prioritizes scheduling based on learning and experience. In another implementation, the user may manually define the importance of each behavior, e.g., determining which behavior gets the precedence over other behaviors when both behaviors comprise stimuli which would activate their two different sets of responses in reaction to a single event. For example, when an image contains two stimuli (e.g., a face and the color red) which simultaneously activate two sets of responses, the user may manually predetermine when behavior 1 is engaged and when behavior 2 may be performed if behavior 1 is not complete (e.g., the user may indicate that behavior 2 may interrupt behavior 1, may start after behavior 1 completes, and/or the like).") Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip N Warner whose telephone number is (571)270-7407. The examiner can normally be reached Monday-Friday 7:00 am-4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached at 571-272-6787.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Philip N Warner/ Examiner, Art Unit 3624 /Jerry O'Connor/ Supervisory Patent Examiner, Group Art Unit 3624
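The scheduler behavior the examiner relies on in the Versace excerpts quoted above (Paragraphs 139 and 142), namely weighted command arbitration plus a "neither executes" rule for identically weighted conflicts, can be made concrete with a short sketch. The following Python is purely illustrative and is not code from Versace or from the application: the `Command` class, its field names, and the multiplicative combination of the specific and global weights are assumptions made for the example, since the excerpt only says the two weights "get combined into a final weight."

```python
# Illustrative sketch only; not code from Versace or the application.
# Assumption: specific and global weights combine multiplicatively.
from dataclasses import dataclass


@dataclass
class Command:
    source: str              # e.g., "controller", "navigation", "behavior"
    action: str              # command identifier
    resource: str            # hardware resource or settings group affected
    specific_weight: float   # per-command weight (Paragraph 139)
    global_weight: float     # per-source weight from the scheduler (Paragraph 139)

    @property
    def combined_weight(self) -> float:
        # Assumed combination rule; the excerpt does not state one.
        return self.specific_weight * self.global_weight


def arbitrate(commands: list[Command]) -> list[Command]:
    """Pick one winner per resource; highest combined weight executes.

    A tie at the top of a resource group stands in for Paragraph 142's
    "neither executes" case (no symmetry-breaking modeled here).
    """
    by_resource: dict[str, list[Command]] = {}
    for cmd in commands:
        by_resource.setdefault(cmd.resource, []).append(cmd)

    winners: list[Command] = []
    for group in by_resource.values():
        group.sort(key=lambda c: c.combined_weight, reverse=True)
        if len(group) > 1 and group[0].combined_weight == group[1].combined_weight:
            continue  # identically weighted conflict: neither executes
        winners.append(group[0])
    return winners


if __name__ == "__main__":
    cmds = [
        Command("controller", "motor_cmd_1", "drive", specific_weight=0.4, global_weight=0.2),
        Command("controller", "motor_cmd_2", "drive", specific_weight=0.9, global_weight=0.1),
    ]
    for winner in arbitrate(cmds):
        print(winner.action)  # motor_cmd_2: 0.9 * 0.1 = 0.09 > 0.4 * 0.2 = 0.08
```

With the figures quoted in Paragraph 139 (specific 0.4 with global 0.2 versus specific 0.9 with global 0.1), the second command's assumed combined weight (0.09) exceeds the first's (0.08), which matches the outcome described in the excerpt.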

Prosecution Timeline

Jul 23, 2024
Application Filed
Jul 23, 2024
Response after Non-Final Action
Nov 01, 2025
Non-Final Rejection — §103
Feb 05, 2026
Response Filed
Mar 05, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596974
MULTI-LAYER ABRASIVE TOOLS FOR CONCRETE SURFACE PROCESSING
2y 5m to grant · Granted Apr 07, 2026
Patent 12596984
INFORMATION GENERATION APPARATUS, INFORMATION GENERATION METHOD AND PROGRAM
2y 5m to grant · Granted Apr 07, 2026
Patent 12579490
GENERATING SUGGESTIONS WITHIN A DATA INTEGRATION SYSTEM
2y 5m to grant · Granted Mar 17, 2026
Patent 12567011
BATTERY LEDGER MANAGEMENT SYSTEM AND METHOD OF BATTERY LEDGER MANAGEMENT
2y 5m to grant · Granted Mar 03, 2026
Patent 12493819
UTILIZING MACHINE LEARNING MODELS TO GENERATE INITIATIVE PLANS
2y 5m to grant · Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
65%
With Interview (+28.6%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
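The projection figures above appear to relate additively in percentage points: the 36% baseline is the examiner's career allow rate, and the +28.6% interview lift added to it yields roughly the 65% "With Interview" figure. A minimal sketch of that arithmetic, assuming the additive combination (the page does not spell out the rule):

```python
# Minimal sketch, assuming the "With Interview" projection is the baseline
# grant probability plus the interview lift in percentage points.
baseline_grant_probability = 0.36   # career allow rate shown above
interview_lift = 0.286              # "+28.6%" interview lift shown above

with_interview = baseline_grant_probability + interview_lift
print(f"{with_interview:.0%}")      # ~65%, matching the projection above
```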
