DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see pp. 9-10, filed 11/21/2025, with respect to the rejection of claim 1 under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argues on p. 10 that Sanchez and Scholan fail to teach a real-time session defined by user input and, as a result, fail to teach or suggest switching between one user-defined action and another user-defined action within the same real-time session. While neither of the above references may explicitly state that the session, or communication between the user and the robot, is instantiated by the user, this concept is notoriously old and well-known in the art, as evidenced by Shah (US 20200276708 A1).
Claim Interpretation
Amended claim 1 appears to place particular emphasis on a “session” being defined by user input. Because the specification does not provide details as to how the session is defined via user input, or what constitutes a session, this concept is being interpreted rather broadly. A “session,” in the context of the presently claimed invention, generally refers to a communication link between a number of users, systems, computers, or the like. To promote clarity of the record, it should be noted that Examiner interprets a user-defined session as a session between two or more devices which has been instantiated as the result of a user providing some kind of input to one or more of the devices.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez (US 20210197394 A1) in view of Scholan (US 20230390009 A1) and Shah (US 20200276708 A1).
Claim 1
Sanchez teaches
A method comprising:
receiving user input that specifies:
(i) a real-time session with a robot
(Sanchez - [0019] The present development provides a system (100) implementing a multi-agent BDI architecture to control robotic devices (200), operating in a work area (700), in a concurrent and cooperative manner.
[0065] According to the present development, the system (100) disclosed herein further comprises an interpreting BDI agent (500) through which a user can interact with the system (100), define the series of events (900), control it in an asynchronous manner and receive information in real-time on their execution. According to a preferred embodiment, the interaction between the user and the interpreting BDI agent (500) is intuitively conducted by using natural language, whereby the user does not need specific technical knowledge. The interpreting BDI agent (500) comprises an authoring module (510) that has a user interface (511) through which the user of the system (100) specifies the series of events (900).)
EXAMINER NOTE: The user interacts with the interpreting BDI agent in real time through an interface. Because the interpreting BDI agent communicates with the system in real time, this indicates the existence of a real-time session.
…robot having a plurality of parts,
(Sanchez - [0021] … Finally, the lowest level of abstraction corresponds to the signals that executing BDI agents (300) send to robotic devices (200) to activate their actuators (210).
[0079] In the case of robotic drama, robots can be anthropomorphic, zoomorphic, or otherwise. Actuators usually move parts that resemble arms, legs and fingers, although it is also very common for the main locomotion mechanism to be wheels.
[0043] … For example, two robotic devices synchronize their actions to build a brick wall. The first robotic device is responsible for spreading the cement and the second device is responsible for placing the bricks. When the first robotic device finishes its task, it informs the second robotic device to place the bricks. When the second robotic device finishes placing the bricks, it informs the other device to spread another layer of cement.)
EXAMINER NOTE: Each of the above passages indicates the presence of robots having a plurality of parts.
(ii) code defining a plurality of user-defined actions, including a first action and a second action, and
(Sanchez - [0021] As shown in FIG. 1, the system (100) disclosed herein is characterized in that it comprises one or more executing BDI agents (300) in charge of controlling actions of robotic devices (200) so that it may execute the series of events, a director BDI agent (400) responsible for configuring each executing BDI agent (300) and monitoring the execution of actions they execute, and an interpreting BDI agent (500) through which a user of the system defines the series of events, monitors its status and progress, and sets control instructions. FIGS. 2 to 4 show detailed diagrams of the system (100) wherein each module constituting BDI agents (300, 400, 500), robotic devices (200) and the communication channel (610, 620, 630) are identified.)
EXAMINER NOTE: A "series of events" indicates the presence of at least two events (actions). In the context of a robotic environment, control instructions are conveyed to the robotic device through machine code, which is derived from the input specified by the user.
(iii) at least one custom reaction that specifies a condition of the first action under which a real-time robotics control layer should initiate a real-time change in behavior involving the second action of the plurality of user-defined actions;
(Sanchez - [0008] … an interpreting BDI agent including an authoring module having a user interface through which system users specify the series of events and control their execution, a translation module translating the series of events to the graph, and a monitoring module of the execution of the series of events.
[0043] Thus, according to a preferred embodiment of the invention, the graph (910) is implemented through an extension of Petri Nets with Active Transitions. … The active transition nodes (AT) are used to synchronize actions in the series of events (900). This type of node has two main purposes: first, to synchronize actions that a single robotic device (200) must carry out. For example, a robotic device can fly over a field while taking pictures and releasing marking marks at specific locations, all at the same time. The second purpose of the active transition nodes (AT) is to synchronize actions executed by two or more robotic devices (200). For example, two robotic devices synchronize their actions to build a brick wall. The first robotic device is responsible for spreading the cement and the second device is responsible for placing the bricks. When the first robotic device finishes its task, it informs the second robotic device to place the bricks. When the second robotic device finishes placing the bricks, it informs the other device to spread another layer of cement.)
EXAMINER NOTE: The user specifies the series of events (actions) and controls their execution (the conditions of the actions). These events are translated to the graph, which is an abstraction of the series of events input by the user. The graph is implemented through Petri Nets with Active Transitions, which is ultimately an abstraction of the user-defined actions and reactions. In the brick-laying example, the user ultimately specifies that the second robotic device places the bricks after the first robotic device spreads the cement.
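For clarity of the record, the active-transition synchronization discussed above may be sketched as follows. This sketch is the examiner's illustrative abstraction only, not Sanchez's actual implementation; all function and variable names (e.g., active_transition, spread_cement) are hypothetical shorthand and do not appear in the reference.

```python
# Illustrative sketch of an "active transition" node: successor actions
# may begin only once every predecessor action has completed.

def active_transition(predecessors, successors, completed):
    """Return the actions to start if the transition fires, else []."""
    if all(action in completed for action in predecessors):
        return list(successors)   # transition enabled: start successors
    return []                     # transition not yet enabled

# Brick-laying example: placing bricks waits on spreading cement.
completed = {"spread_cement"}
started = active_transition({"spread_cement"}, ["place_bricks"], completed)
```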
executing, by the real-time robotics control layer, a real-time session, including, at each tick of a real-time control cycle of the real-time session defined by the user input …
(Sanchez - [0044] Action line nodes (DA, UA), which determine actions to be executed, have two types of duration: defined and undefined. Action line nodes (AD) of defined duration are associated with actions that run during a limited, predictable time interval. …)
EXAMINER NOTE: See Annotated Fig. 5 below. Action line nodes have a defined duration. This duration is essentially a time window (a tick).
[Annotated Figure 5 of Sanchez (media_image1.png, 870×637, greyscale)]
… performing operations comprising:
Executing, during the real-time session defined by the user input, one or more respective commands for the first action,
Determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied,
and in response to the real-time robotics control layer determining that the condition of the first action of the plurality of actions is satisfied, switching, within the same real-time session defined by the user input, from executing the first action during a current tick of the real-time control cycle of the real-time session to executing the second action of the plurality of user-defined actions on the next tick of the real-time control cycle during the same real-time session,
(Sanchez - [0045] - In FIG. 5, the transition AT.sub.2 will only activate after actions DA.sub.1 and DA.sub.2 have completed; at this moment, the execution of action DA.sub.3 will begin which, upon completion, will activate transition AT.sub.3, thereby simultaneously starting actions DA.sub.4, DA.sub.5 and DA.sub.6. Transition AT.sub.4 begins actions UA7, of undefined duration, and DA.sub.8, of defined duration. Action UA.sub.7 will be executed until the active transition AT.sub.5 orders its completion as the execution of the action of defined duration DA.sub.8 was completed or because subsequent actions UA.sub.9 and UA.sub.10 meet their activation conditions.)
EXAMINER NOTE: During the first interval (tick), DA.sub.1 (first command) is performed. Once DA.sub.1 and DA.sub.2 have completed (condition), DA.sub.3 (second action) begins during the next time interval (next tick). Because the time interval is "limited" (see [0044] cited above), it meets applicant's definition of a "tick" (see p. 6, lines 5-12).
(Sanchez - [0007] The present development relates to a system implementing a multi-agent BDI architecture to control in real time, and in a concurrent and cooperative manner, robotic devices reproducing a series of events defined by a user.)
EXAMINER NOTE: The control is done in real time.
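To promote clarity of the record, the tick-based switching mapped above may be sketched as follows. This is an illustrative abstraction by the examiner, not code from Sanchez; the names run_session, first_action, and second_action are hypothetical.

```python
# Illustrative tick loop: the first action executes each tick; once its
# condition is satisfied, the second action executes on the next tick
# of the same real-time session.

def run_session(condition_met, ticks):
    current = "first_action"
    trace = []
    for _ in range(ticks):
        trace.append(current)                  # command issued this tick
        if current == "first_action" and condition_met():
            current = "second_action"          # switch takes effect next tick
    return trace

# Condition satisfied immediately: switch occurs after the first tick.
trace = run_session(lambda: True, ticks=3)
```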
Sanchez may not explicitly teach the following limitations in combination. However, Scholan teaches
wherein the robot is configured to enter a fault state if one or more commands are not received from the real-time robotics control layer within each tick of the real- time control cycle of the real-time session.
(Scholan - [0044] … Specifically, the safety device 314 can prevent communications between the main controller 312 and one or more of the components 302, 304 (or parts or components thereof) when it has been detected that the surgical robot system 300 is in a fault state. In some cases, the components or devices in the system that communicate with the main controller 312 may be configured to: receive communications from the main controller at a predetermined interval or frequency, and automatically transition into a safe state if they cease to receive such communications for a period of time (e.g. a predetermined number of intervals). Accordingly, cutting off communication between the main controller and a component or device may automatically cause that component or device to transition to a safe state.)
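The watchdog behavior described in Scholan [0044] may be sketched as follows. This is an illustrative abstraction by the examiner, not Scholan's implementation; the Component class and its names are hypothetical.

```python
import time

# Illustrative watchdog: a component transitions to a fault/safe state
# if it ceases to receive commands within the expected interval.

class Component:
    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.last_command = time.monotonic()
        self.state = "running"

    def receive_command(self):
        self.last_command = time.monotonic()   # command arrived in time

    def check(self):
        if time.monotonic() - self.last_command > self.interval_s:
            self.state = "fault"               # missed interval: safe state
        return self.state

c = Component(interval_s=0.01)
c.receive_command()
c.check()            # commands arriving on time: still "running"
```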
While Sanchez and Scholan may not explicitly state that the real-time session is defined by user input, this aspect is notoriously well-known in the art. As an example, Shah teaches a system wherein a user interacts with a device to provide inputs and start a control session.
(Shah - [0016] … The user 105 interacts with the robot simulation server 130 using the primary client device 110 in order to initialize, customize, begin, run, and monitor a robot simulation session. … The users 120 interact with the robot simulation server 130 using the user-controlled client devices 115, for instance by providing inputs to control the robot via the user-controlled client devices 115.
[0019] It should be noted that in various embodiments, the primary client device 110, the user-controlled client devices 115, and the machine-controlled client devices 125 overlap. For instance, the primary client device 110 can be used by the user 105 to request a robot simulation session, but can include an autonomous robot control program configured to control a robot during the simulation session autonomously. )
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement fault states as taught by Scholan into the robot control system taught by Sanchez in order to prevent undesired outcomes during malfunctions.
(Scholan - [0005] … If the system is not operating as expected there can be severe, if not, catastrophic consequences. Accordingly, it may be desirable to implement one or more safety mechanisms which are able to determine if there is a fault with the surgical robot system, and the control system 128 in particular, and if a fault is detected, put the system, or one or more components of the system, into a safe state.)
Regarding the aspect of a session defined by user input, Shah illustrates that this concept is not new. One of ordinary skill in the art would have been more than capable of making the session instantiable by the user, and doing so would have yielded the same predictable result.
Claim 2
Modified Sanchez teaches the limitations of claim 1 as outlined above. Sanchez further teaches
wherein each action of the plurality of actions causes the real-time robotics control layer to issue the one or more respective commands to different respective parts of the plurality of parts of the robot.
(Sanchez - [0075] On the other hand, and according to other embodiments of the present development, executing BDI agents (300) comprise a means-ends management module (390) which controls actions of the executing BDI agent (300) depending on their desires and intentions. The means-ends management module (390) execute intentions of the executing BDI agent (300) and sends signals to the action module (350) to control actuators (210) of the robotic device in a concurrent manner and in real time.)
EXAMINER NOTE: A plurality of actuators indicates different parts of a robot.
Claim 3
CLAIM INTERPRETATION NOTE: The examiner interprets a singular action to mean an action in which multiple robotic devices are coordinated or controlled simultaneously. In the instant specification (see page 11, lines 15-18), the applicant states, "a single action can define, e.g., a coordinated movement of a robotic arm and a conveyor belt. The single action can control, e.g., a motion of the robotic arm picking an object from the conveyor belt and, at the same time, a speed of the conveyor belt." A robotic arm and a conveyor belt are both robotic devices. In the case of Sanchez, the robotic devices are those controlled by BDI agents 300A and 300B, which will be discussed below with respect to Figure 5.
Modified Sanchez teaches the limitations of claim 1 as outlined above. Sanchez further teaches
wherein the user input further specifies:
(iv) an initial singular action, and
(Sanchez - [0007] The present development relates to a system implementing a multi-agent BDI architecture to control in real time, and in a concurrent and cooperative manner, robotic devices reproducing a series of events defined by a user.)
EXAMINER NOTE: Actions are defined by a user.
(Sanchez - [0045] FIG. 5 shows an example of a graph (910) implemented through a Petri Net with Active Transitions for the system (100) described herein. This example shows how Petri Nets with Active Transitions allow cooperation between agents in order to enable synchronization of actions. Action line nodes (DA, UA) represent actions executed by two BDI agents (300A) (300B), … In FIG. 5, the transition AT.sub.2 will only activate after actions DA.sub.1 and DA.sub.2 have completed; at this moment, the execution of action DA.sub.3 will begin which, upon completion, will activate transition AT.sub.3, thereby simultaneously starting actions DA.sub.4, DA.sub.5 and DA.sub.6. Transition AT.sub.4 begins actions UA7, of undefined duration, and DA.sub.8, of defined duration. Action UA.sub.7 will be executed until the active transition AT.sub.5 orders its completion as the execution of the action of defined duration DA.sub.8 was completed or because subsequent actions UA.sub.9 and UA.sub.10 meet their activation conditions.)
EXAMINER NOTE: The above passage is meant to illustrate the functioning of the Petri Net shown in Fig. 5. From the above, as well as what is shown in Fig. 5 (reproduced below and annotated for clarity), it is determined that Sanchez teaches an initial singular action per the above interpretation of the term. Transitions AT3, AT4, AT5, and AT6 of Figure 5 may be thought of as the start of initial singular actions. Once these transitions are activated, the robotic devices controlled by BDI agents 300A and 300B begin their specified actions. For example, once AT6 is activated, BDI agent 300A executes action DA11 while BDI agent 300B executes action DA12.
[Annotated Figure 5 of Sanchez (media_image2.png, 817×620, greyscale)]
(v) a custom reaction associated with the initial singular action that causes the real- time robotics control layer to switch from executing commands associated with the initial singular action to executing commands associated with the plurality of actions.
EXAMINER NOTE: Continuing from the above discussion regarding Petri Nets and the initial singular action, once AT8 is activated, BDI agent 300A executes DA14 while BDI agent 300B is idle. The transition AT8 and the action DA14 may be thought of as a custom reaction associated with the initial singular action that causes a switch from executing commands associated with the initial singular action to executing commands associated with a plurality of actions.
Claim 4
CLAIM INTERPRETATION NOTE: The examiner interprets a "part of the plurality of parts of the robot" to mean a robotic device among a plurality of robotic devices. In the instant specification (see page 11, lines 15-18), the applicant states, "a single action can define, e.g., a coordinated movement of a robotic arm and a conveyor belt. The single action can control, e.g., a motion of the robotic arm picking an object from the conveyor belt and, at the same time, a speed of the conveyor belt." A robotic arm and a conveyor belt are both robotic devices. In the case of Sanchez, the robotic devices are those controlled by BDI agents 300A and 300B.
Modified Sanchez teaches the limitations of claim 3 as outlined above. Sanchez further teaches
wherein the initial singular action is associated with commands that cause the plurality of parts of the robot to move together,
EXAMINER NOTE: Per the discussion above regarding claim 3, it is shown that multiple robot devices move together as a result of an initial singular action.
(Sanchez - [0021] … Finally, the lowest level of abstraction corresponds to the signals that executing BDI agents (300) send to robotic devices (200) to activate their actuators (210).)
EXAMINER NOTE: Activation of actuators indicates movement of robotic devices.
and wherein each of the plurality of actions is associated with one or more respective commands that cause each part of the plurality of parts of the robot to move independently.
EXAMINER NOTE: See annotated Figure 5 above under the discussion regarding claim 3. Actions DA1, DA2, DA14, and DA15 are examples of actions in which each part of the plurality of parts of the robot moves independently.
Claim 6
Modified Sanchez teaches the limitations of claim 1 as outlined above. Sanchez further teaches
wherein determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied comprises checking a state of the first action.
(Sanchez - [0041] a) Expropriation: the active transition has control over undefined duration actions (UA) on which it depends, and can end an action that has not been yet completed. To this end, actions subsequent to the active transition can verify if their activation conditions are met, in which case the expropriation mechanism of the active transition is activated.)
Claim 7
Modified Sanchez teaches the limitations of claim 1 as outlined above. Sanchez further teaches
wherein determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied comprises checking whether the first action has reached a goal state or has otherwise finished execution.
(Sanchez - [0041] a) Expropriation: the active transition has control over undefined duration actions (UA) on which it depends, and can end an action that has not been yet completed. To this end, actions subsequent to the active transition can verify if their activation conditions are met, in which case the expropriation mechanism of the active transition is activated.)
Claim 8
Modified Sanchez teaches the limitations of claim 1 as outlined above. Sanchez further teaches
wherein the custom reaction also specifies a second condition of a third action under which the real-time robotics control layer should initiate the real-time change in behavior.
(Sanchez - [0045] - In FIG. 5, the transition AT.sub.2 will only activate after actions DA.sub.1 and DA.sub.2 have completed; at this moment, the execution of action DA.sub.3 will begin which, upon completion, will activate transition AT.sub.3, thereby simultaneously starting actions DA.sub.4, DA.sub.5 and DA.sub.6. Transition AT.sub.4 begins actions UA7, of undefined duration, and DA.sub.8, of defined duration. Action UA.sub.7 will be executed until the active transition AT.sub.5 orders its completion as the execution of the action of defined duration DA.sub.8 was completed or because subsequent actions UA.sub.9 and UA.sub.10 meet their activation conditions.)
EXAMINER NOTE: Upon completion of DA.sub.3, the actions DA.sub.4, DA.sub.5, and DA.sub.6 (any of which could be the third action) are initiated.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sanchez, Shah, and Scholan, and further in view of Gilliand (US 6282460 B1).
Claim 5
CLAIM INTERPRETATION NOTE: The examiner interprets a "final singular action" as a command to stop one or more robotic devices after a number of actions have been performed. Support for this interpretation may be found in the instant specification (p.18, ln 21-27). The specification states, "In some cases, user input can further specify: (iv) a final singular action, … For example, after the execution of each action associated with a respective robot arm is finished, the custom reaction can cause the real-time robotics control layer to execute commands associated with the default stop action …"
Modified Sanchez teaches the limitations of claim 1 as outlined above. Sanchez may not explicitly teach the following limitations in combination. However, Gilliand teaches
wherein the user input further specifies:
(iv) a final singular action, and
(Gilliand - [col 8, ln 50-53] The operator can then further modify the programs to insert additional stops or to modify the job of a robot so as to prevent the collision from occurring.)
(v) a custom reaction associated with the plurality of actions that causes the real-time robotics control layer to execute commands associated with the final singular action when the commands associated with the plurality of actions finished executing.
(Gilliand - [col 17, ln 25-40] To avoid collisions between the robots the present invention provides that the operator synchronizes the job programs for the robots. The process of synchronization requires the operator to insert a series of stop commands (stops) into the job for each robot, determine which stops are necessary to prevent collisions, add the conditions under which the program may be resumed, and remove unnecessary stops. Therefore, in the above example, robots 100A-100D would be executing their respective jobs and robots 100B and 100D would be heading for a collision. However, just prior to the collision point, robot 100D would encounter a stop in its job. The other robots 100 would continue in their operations because they had not encountered a stop command. Once these robots have completed the operations which remove them from the danger zone then they encounter a stop command.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement stop commands as taught by Gilliand into the robot control system taught by Sanchez as modified by Scholan in order to further synchronize robots and prevent collisions.
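The combined teaching — executing the user-defined actions and then a final singular stop action — may be sketched as follows. This is an illustrative abstraction by the examiner, not code from Sanchez or Gilliand; all names are hypothetical.

```python
# Illustrative sketch: once the commands associated with the plurality of
# user-defined actions finish executing, the control layer executes the
# commands associated with a final singular "stop" action.

def execute_with_final_stop(actions):
    log = [f"exec:{a}" for a in actions]   # run each user-defined action
    log.append("exec:stop")                # final singular action
    return log

log = execute_with_final_stop(["spread_cement", "place_bricks"])
```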
Claims 9-13, 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sanchez, Shah, and Scholan, and further in view of Larsen (US 20230158662 A1).
Claim 9
Modified Sanchez teaches the limitations of claim 1 as outlined above. Sanchez further teaches
wherein determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied comprises:
obtaining one or more sensor measurements characterizing a part of the plurality of parts of the robot to which the real-time robotics control layer issues commands associated with the first action; and
(Sanchez - [0030] In the model of Petri Nets with Active Transitions disclosed herein, actions of the series of events (900) can be … so-called defined duration actions (DA), which autonomously end their execution as a consequence of an event detected by robotic device sensors)
Sanchez does not explicitly teach the following limitations. However, Larsen teaches
determining whether the one or more sensor measurements are above a predetermined threshold.
(Larsen - [0080] In some embodiments, the data processing module 110 may include an algorithm for selecting a programmed response to be effected based at least in part on the received data. The algorithm may select a response based entirely or in part on the data from the data input module. In a simple example, a length of time of close proximity (e.g., proximity less than a threshold distance) may be detected by a sensor, and the algorithm may determine a threshold condition (e.g., a threshold time) for action, and may specify the action.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement Larsen’s threshold into Sanchez’s decision making in order to have a quantifiable condition for determining an action.
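The threshold condition taught by Larsen, as applied to Sanchez's sensor-detected events, may be sketched as follows. This is an illustrative abstraction by the examiner; the function and parameter names are hypothetical and do not appear in either reference.

```python
# Illustrative sketch: a quantifiable condition is satisfied when one or
# more sensor measurements exceed a predetermined threshold.

def condition_satisfied(measurements, threshold):
    """True if any measurement is above the predetermined threshold."""
    return any(m > threshold for m in measurements)

# Example: proximity readings checked against a 0.5 m threshold.
result = condition_satisfied([0.2, 0.9], threshold=0.5)
```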
Claim 10
Sanchez teaches
receiving user input that specifies:
(i) a real-time session with a robot
(Sanchez - [0019] The present development provides a system (100) implementing a multi-agent BDI architecture to control robotic devices (200), operating in a work area (700), in a concurrent and cooperative manner.
[0065] According to the present development, the system (100) disclosed herein further comprises an interpreting BDI agent (500) through which a user can interact with the system (100), define the series of events (900), control it in an asynchronous manner and receive information in real-time on their execution. According to a preferred embodiment, the interaction between the user and the interpreting BDI agent (500) is intuitively conducted by using natural language, whereby the user does not need specific technical knowledge. The interpreting BDI agent (500) comprises an authoring module (510) that has a user interface (511) through which the user of the system (100) specifies the series of events (900).)
EXAMINER NOTE: The user interacts with the interpreting BDI agent in real time through an interface. Because the interpreting BDI agent communicates with the system in real time, this indicates the existence of a real-time session.
…robot having a plurality of parts,
(Sanchez - [0021] … Finally, the lowest level of abstraction corresponds to the signals that executing BDI agents (300) send to robotic devices (200) to activate their actuators (210).
[0079] In the case of robotic drama, robots can be anthropomorphic, zoomorphic, or otherwise. Actuators usually move parts that resemble arms, legs and fingers, although it is also very common for the main locomotion mechanism to be wheels.
[0043] … For example, two robotic devices synchronize their actions to build a brick wall. The first robotic device is responsible for spreading the cement and the second device is responsible for placing the bricks. When the first robotic device finishes its task, it informs the second robotic device to place the bricks. When the second robotic device finishes placing the bricks, it informs the other device to spread another layer of cement.)
EXAMINER NOTE: Each of the above passages indicates the presence of robots having a plurality of parts.
(ii) code defining a plurality of user-defined actions, including a first action and a second action, and
(Sanchez - [0021] As shown in FIG. 1, the system (100) disclosed herein is characterized in that it comprises one or more executing BDI agents (300) in charge of controlling actions of robotic devices (200) so that it may execute the series of events, a director BDI agent (400) responsible for configuring each executing BDI agent (300) and monitoring the execution of actions they execute, and an interpreting BDI agent (500) through which a user of the system defines the series of events, monitors its status and progress, and sets control instructions. FIGS. 2 to 4 show detailed diagrams of the system (100) wherein each module constituting BDI agents (300, 400, 500), robotic devices (200) and the communication channel (610, 620, 630) are identified.)
EXAMINER NOTE: A "series of events" indicates the presence of at least two events (actions). In the context of a robotic environment, control instructions are conveyed to the robotic device through machine code, which is derived from the input specified by the user.
(iii) at least one custom reaction that specifies a condition of the first action under which a real-time robotics control layer should initiate a real-time change in behavior involving the second action of the plurality of user-defined actions;
(Sanchez - [0008] … an interpreting BDI agent including an authoring module having a user interface through which system users specify the series of events and control their execution, a translation module translating the series of events to the graph, and a monitoring module of the execution of the series of events.
[0043] Thus, according to a preferred embodiment of the invention, the graph (910) is implemented through an extension of Petri Nets with Active Transitions. … The active transition nodes (AT) are used to synchronize actions in the series of events (900). This type of node has two main purposes: first, to synchronize actions that a single robotic device (200) must carry out. For example, a robotic device can fly over a field while taking pictures and releasing marking marks at specific locations, all at the same time. The second purpose of the active transition nodes (AT) is to synchronize actions executed by two or more robotic devices (200). For example, two robotic devices synchronize their actions to build a brick wall. The first robotic device is responsible for spreading the cement and the second device is responsible for placing the bricks. When the first robotic device finishes its task, it informs the second robotic device to place the bricks. When the second robotic device finishes placing the bricks, it informs the other device to spread another layer of cement.)
EXAMINER NOTE: The user specifies the series of events (actions) and controls their execution (conditions of the actions). These events are translated to the graph, which is an abstraction of the series of events input by the user. The graph is implemented through Petri Nets with Active Transitions, which is ultimately an abstraction of the user-defined actions and reactions. In the brick-laying example, the user ultimately specifies that the second robotic device places the bricks after the first robot spreads cement.
executing, by the real-time robotics control layer, a real-time session, including, at each tick of a real-time control cycle of the real-time session defined by the user input…
(Sanchez - [0044] Action line nodes (DA, UA), which determine actions to be executed, have two types of duration: defined and undefined. Action line nodes (AD) of defined duration are associated with actions that run during a limited, predictable time interval. …)
EXAMINER NOTE: See Annotated Fig. 5 below. Action line nodes have a defined duration. This duration is essentially a time window (a tick).
[Annotated Fig. 5 (media_image1.png, 870 × 637, greyscale)]
… performing operations comprising:
Executing, during the real-time session defined by the user input, one or more respective commands for the first action,
Determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied, and
in response to the real-time robotics control layer determining that the condition of the first action of the plurality of actions is satisfied, switching, within the same real-time session defined by the user input, from executing the first action during a current tick of the real-time control cycle of the real-time session to executing the second action of the plurality of user-defined actions on the next tick of the real-time control cycle during the same real-time session,
(Sanchez - [0045] - In FIG. 5, the transition AT.sub.2 will only activate after actions DA.sub.1 and DA.sub.2 have completed; at this moment, the execution of action DA.sub.3 will begin which, upon completion, will activate transition AT.sub.3, thereby simultaneously starting actions DA.sub.4, DA.sub.5 and DA.sub.6. Transition AT.sub.4 begins actions UA7, of undefined duration, and DA.sub.8, of defined duration. Action UA.sub.7 will be executed until the active transition AT.sub.5 orders its completion as the execution of the action of defined duration DA.sub.8 was completed or because subsequent actions UA.sub.9 and UA.sub.10 meet their activation conditions.)
EXAMINER NOTE: During the first interval (tick), DA.sub.1 (first command) is performed. Once DA.sub.1 and DA.sub.2 have completed (condition), DA.sub.3 begins (second action) during the next time interval (next tick). Because the time interval is "limited" (see [0044] cited above), it meets applicant's definition of a "tick" (see p.6, lines 5-12).
(Sanchez - [0007] The present development relates to a system implementing a multi-agent BDI architecture to control in real time, and in a concurrent and cooperative manner, robotic devices reproducing a series of events defined by a user.)
EXAMINER NOTE: The control is done in real time.
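For clarity of the record, the tick-by-tick switching behavior being mapped onto Sanchez's active transitions can be sketched as follows. This is a hypothetical illustration only; the function and action names (run_session, spread_cement, place_bricks) are the Examiner's own shorthand and do not appear in Sanchez or the instant application:

```python
# Hypothetical sketch of the claimed tick-based switching; all names are
# illustrative and do not appear in any cited reference.

def run_session(first, second, condition, ticks):
    """Each tick: execute the current action's command, then evaluate the
    first action's condition. If satisfied, the switch to the second
    action takes effect on the NEXT tick, mirroring the claim language."""
    current = first
    trace = []
    for tick in range(ticks):
        trace.append((tick, current["name"]))
        current["command"]()
        if current is first and condition():
            current = second  # executed starting on the following tick
    return trace

# Brick-laying example from Sanchez [0043]: cement is spread first, and
# completion of that action triggers brick placement.
done = {"flag": False}
first = {"name": "spread_cement", "command": lambda: done.update(flag=True)}
second = {"name": "place_bricks", "command": lambda: None}
trace = run_session(first, second, lambda: done["flag"], ticks=3)
# trace -> [(0, 'spread_cement'), (1, 'place_bricks'), (2, 'place_bricks')]
```

The condition is checked during the current tick, but the second action is not executed until the following tick, which is the reading of the claim applied above.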
Sanchez may not explicitly teach the following limitations. However, Larsen teaches
A system comprising: one or more computers,
and one or more storage devices storing instructions that when executed by the one or more computers cause the one or more computers to perform operations comprising:
(Larsen - [0139] Embodiments of the present disclosure may comprise a system having a central processing unit (CPU) 520, a bus 530, a memory unit 540, a power supply unit (PSU) 550, and one or more Input/Output (I/O) units. The CPU 520 coupled to the memory unit 540 and the plurality of I/O units 560 via the bus 530, all of which are powered by the PSU 550. It should be understood that, in some embodiments, each disclosed unit may actually be a plurality of such units for the purposes of redundancy, high availability, and/or performance. The combination of the presently disclosed units is configured to perform the stages any method disclosed herein.)
Sanchez may not explicitly teach the following limitations. However, Scholan teaches
wherein the robot is configured to enter a fault state if one or more commands are not received from the real-time robotics control layer within each tick of the real-time control cycle of the real-time session.
(Scholan - [0044] … Specifically, the safety device 314 can prevent communications between the main controller 312 and one or more of the components 302, 304 (or parts or components thereof) when it has been detected that the surgical robot system 300 is in a fault state. In some cases, the components or devices in the system that communicate with the main controller 312 may be configured to: receive communications from the main controller at a predetermined interval or frequency, and automatically transition into a safe state if they cease to receive such communications for a period of time (e.g. a predetermined number of intervals). Accordingly, cutting off communication between the main controller and a component or device may automatically cause that component or device to transition to a safe state.)
While Sanchez and Scholan may not explicitly state that the real-time session is defined by user input, this aspect is notoriously well-known in the art. As an example, Shah teaches a system wherein a user interacts with a device to provide inputs and start a control session.
(Shah - [0016] … The user 105 interacts with the robot simulation server 130 using the primary client device 110 in order to initialize, customize, begin, run, and monitor a robot simulation session. … The users 120 interact with the robot simulation server 130 using the user-controlled client devices 115, for instance by providing inputs to control the robot via the user-controlled client devices 115.
[0019] It should be noted that in various embodiments, the primary client device 110, the user-controlled client devices 115, and the machine-controlled client devices 125 overlap. For instance, the primary client device 110 can be used by the user 105 to request a robot simulation session, but can include an autonomous robot control program configured to control a robot during the simulation session autonomously. )
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the system taught by Sanchez on the computing hardware described by Larsen in order to provide a means for implementing the processes, and to implement fault states as taught by Scholan in order to prevent undesired outcomes during malfunctions.
(Scholan - [0005] … If the system is not operating as expected there can be severe, if not, catastrophic consequences. Accordingly, it may be desirable to implement one or more safety mechanisms which are able to determine if there is a fault with the surgical robot system, and the control system 128 in particular, and if a fault is detected, put the system, or one or more components of the system, into a safe state.)
Regarding the aspect of a session defined by user input, Shah illustrates that this concept is not new, and one of ordinary skill in the art would be more than capable of making the session instantiable by the user, and the invention would have yielded the same predictable result.
Claim 11
The combination of Sanchez, Scholan, Shah, and Larsen teaches the limitations of claim 10 as outlined above. Sanchez further teaches
wherein each action of the plurality of actions causes the real-time robotics control layer to issue the one or more respective commands to different respective parts of the plurality of parts of the robot.
(Sanchez - [0075] On the other hand, and according to other embodiments of the present development, executing BDI agents (300) comprise a means-ends management module (390) which controls actions of the executing BDI agent (300) depending on their desires and intentions. The means-ends management module (390) execute intentions of the executing BDI agent (300) and sends signals to the action module (350) to control actuators (210) of the robotic device in a concurrent manner and in real time.)
EXAMINER NOTE: A plurality of actuators indicates different parts of a robot.
CLAIM INTERPRETATION NOTE: The examiner interprets a singular action to mean an action in which multiple robotic devices are coordinated or controlled simultaneously. In the instant specification (see page 11, lines 15-18), the applicant states, "a single action can define, e.g., a coordinated movement of a robotic arm and a conveyor belt. The single action can control, e.g., a motion of the robotic arm picking an object from the conveyor belt and, at the same time, a speed of the conveyor belt." A robotic arm and a conveyor belt are both robotic devices. In the case of Sanchez, the robotic devices are those controlled by BDI agents 300A and 300B, which will be discussed below with respect to Figure 5.
Claim 12
The combination of Sanchez, Scholan, Shah, and Larsen teaches the limitations of claim 10 as outlined above. Sanchez further teaches
wherein the user input further specifies:
(iv) an initial singular action, and
(Sanchez - [0007] The present development relates to a system implementing a multi-agent BDI architecture to control in real time, and in a concurrent and cooperative manner, robotic devices reproducing a series of events defined by a user.)
EXAMINER NOTE: Actions are defined by a user.
(Sanchez - [0045] FIG. 5 shows an example of a graph (910) implemented through a Petri Net with Active Transitions for the system (100) described herein. This example shows how Petri Nets with Active Transitions allow cooperation between agents in order to enable synchronization of actions. Action line nodes (DA, UA) represent actions executed by two BDI agents (300A) (300B), … In FIG. 5, the transition AT.sub.2 will only activate after actions DA.sub.1 and DA.sub.2 have completed; at this moment, the execution of action DA.sub.3 will begin which, upon completion, will activate transition AT.sub.3, thereby simultaneously starting actions DA.sub.4, DA.sub.5 and DA.sub.6. Transition AT.sub.4 begins actions UA7, of undefined duration, and DA.sub.8, of defined duration. Action UA.sub.7 will be executed until the active transition AT.sub.5 orders its completion as the execution of the action of defined duration DA.sub.8 was completed or because subsequent actions UA.sub.9 and UA.sub.10 meet their activation conditions.)
EXAMINER NOTE: The above passage is meant to illustrate the functioning of the Petri Net shown in Fig. 5. Gleaning from the above, as well as what is shown in Fig. 5 (reproduced below and annotated for clarity), it is determined that Sanchez teaches an initial singular action per the above interpretation of the term. Transitions AT3, AT4, AT5, and AT6 of Figure 5 may be thought of as the start of initial singular actions. Once these transitions are activated, the robotic devices controlled by BDI agents 300A and 300B begin their specified actions. For example, once AT6 is activated, BDI agent 300A executes action DA11 while BDI agent 300B executes action DA12.
[Annotated Fig. 5 (media_image2.png, 817 × 620, greyscale)]
(v) a custom reaction associated with the initial singular action that causes the real-time robotics control layer to switch from executing commands associated with the initial singular action to executing commands associated with the plurality of actions.
EXAMINER NOTE: Continuing from the above discussion regarding Petri Nets and the initial singular action, once AT8 is activated, BDI agent 300A executes DA14 while BDI agent 300B is idle. The transition AT8 and the action DA14 may be thought of as a custom reaction associated with the initial singular action that causes a switch from executing commands associated with the initial singular action to executing commands associated with a plurality of actions.
Claim 13
CLAIM INTERPRETATION NOTE: The examiner interprets a "part of the plurality of parts of the robot" to mean a robotic device among a plurality of robotic devices. In the instant specification (see page 11, lines 15-18), the applicant states, "a single action can define, e.g., a coordinated movement of a robotic arm and a conveyor belt. The single action can control, e.g., a motion of the robotic arm picking an object from the conveyor belt and, at the same time, a speed of the conveyor belt." A robotic arm and a conveyor belt are both robotic devices. In the case of Sanchez, the robotic devices are those controlled by BDI agents 300A and 300B.
The combination of Sanchez, Scholan, Shah, and Larsen teaches the limitations of claim 12 as outlined above. Sanchez further teaches
wherein the initial singular action is associated with commands that cause the plurality of parts of the robot to move together,
EXAMINER NOTE: Per the discussion above regarding claim 12, it is shown that multiple robot devices move together as a result of an initial singular action.
(Sanchez - [0021] … Finally, the lowest level of abstraction corresponds to the signals that executing BDI agents (300) send to robotic devices (200) to activate their actuators (210).)
EXAMINER NOTE: Activation of actuators indicates movement of robotic devices.
and wherein each of the plurality of actions is associated with one or more respective commands that cause each part of the plurality of parts of the robot to move independently.
EXAMINER NOTE: See annotated Figure 5 above under the discussion regarding claim 12. Actions DA1, DA2, DA14, and DA15 are examples of actions in which each part of the plurality of parts of the robot move independently.
Claim 15
The combination of Sanchez, Scholan, Shah, and Larsen teaches the limitations of claim 10 as outlined above. Sanchez further teaches
wherein determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied comprises checking a state of the first action.
(Sanchez - [0041] a) Expropriation: the active transition has control over undefined duration actions (UA) on which it depends, and can end an action that has not been yet completed. To this end, actions subsequent to the active transition can verify if their activation conditions are met, in which case the expropriation mechanism of the active transition is activated.)
Claim 16
The combination of Sanchez, Scholan, Shah and Larsen teaches the limitations of claim 10 as outlined above. Sanchez further teaches
wherein determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied comprises checking whether the first action has reached a goal state or has otherwise finished execution.
(Sanchez - [0041] a) Expropriation: the active transition has control over undefined duration actions (UA) on which it depends, and can end an action that has not been yet completed. To this end, actions subsequent to the active transition can verify if their activation conditions are met, in which case the expropriation mechanism of the active transition is activated.)
Claim 17
The combination of Sanchez, Scholan, Shah and Larsen teaches the limitations of claim 10 as outlined above. Sanchez further teaches
wherein the custom reaction also specifies a second condition of a third action under which the real-time robotics control layer should initiate the real-time change in behavior.
(Sanchez - [0045] - In FIG. 5, the transition AT.sub.2 will only activate after actions DA.sub.1 and DA.sub.2 have completed; at this moment, the execution of action DA.sub.3 will begin which, upon completion, will activate transition AT.sub.3, thereby simultaneously starting actions DA.sub.4, DA.sub.5 and DA.sub.6. Transition AT.sub.4 begins actions UA7, of undefined duration, and DA.sub.8, of defined duration. Action UA.sub.7 will be executed until the active transition AT.sub.5 orders its completion as the execution of the action of defined duration DA.sub.8 was completed or because subsequent actions UA.sub.9 and UA.sub.10 meet their activation conditions.)
EXAMINER NOTE: Upon completion of DA.sub.3, the actions DA.sub.4, DA.sub.5, and DA.sub.6 (any of which could be the third action) are initiated.
Claim 18
The combination of Sanchez, Scholan, Shah and Larsen teaches the limitations of claim 10 as outlined above. Sanchez further teaches
wherein determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied comprises:
obtaining one or more sensor measurements characterizing a part of the plurality of parts of the robot to which the real-time robotics control layer issues commands associated with the first action; and
(Sanchez - [0030] In the model of Petri Nets with Active Transitions disclosed herein, actions of the series of events (900) can be … so-called defined duration actions (DA), which autonomously end their execution as a consequence of an event detected by robotic device sensors)
Sanchez does not explicitly teach the following limitations. However, Larsen teaches
determining whether the one or more sensor measurements are above a predetermined threshold.
(Larsen - [0080] In some embodiments, the data processing module 110 may include an algorithm for selecting a programmed response to be effected based at least in part on the received data. The algorithm may select a response based entirely or in part on the data from the data input module. In a simple example, a length of time of close proximity (e.g., proximity less than a threshold distance) may be detected by a sensor, and the algorithm may determine a threshold condition (e.g., a threshold time) for action, and may specify the action.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate Larsen's threshold into Sanchez's decision making in order to have a quantifiable condition for determining an action.
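Larsen's threshold condition reduces to comparing one or more sensor measurements against a predetermined limit. A trivial sketch, with hypothetical names chosen only for illustration:

```python
def condition_satisfied(measurements, threshold):
    """True if any sensor measurement exceeds the predetermined threshold,
    i.e. a quantifiable condition for initiating the behavior change.
    Hypothetical sketch; not drawn from any cited reference."""
    return any(m > threshold for m in measurements)

# condition_satisfied([0.2, 0.9], threshold=0.5) -> True
```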
Claim 19
Sanchez teaches
receiving user input that specifies:
(i) a real-time session with a robot
(Sanchez - [0019] The present development provides a system (100) implementing a multi-agent BDI architecture to control robotic devices (200), operating in a work area (700), in a concurrent and cooperative manner.
[0065] According to the present development, the system (100) disclosed herein further comprises an interpreting BDI agent (500) through which a user can interact with the system (100), define the series of events (900), control it in an asynchronous manner and receive information in real-time on their execution. According to a preferred embodiment, the interaction between the user and the interpreting BDI agent (500) is intuitively conducted by using natural language, whereby the user does not need specific technical knowledge. The interpreting BDI agent (500) comprises an authoring module (510) that has a user interface (511) through which the user of the system (100) specifies the series of events (900).)
EXAMINER NOTE: The user interacts with the interpreting BDI agent in real time through an interface. Because the interpreting BDI agent communicates with the system in real time, this indicates the existence of a real-time session.
…robot having a plurality of parts,
(Sanchez - [0021] … Finally, the lowest level of abstraction corresponds to the signals that executing BDI agents (300) send to robotic devices (200) to activate their actuators (210).
[0079] In the case of robotic drama, robots can be anthropomorphic, zoomorphic, or otherwise. Actuators usually move parts that resemble arms, legs and fingers, although it is also very common for the main locomotion mechanism to be wheels.
[0043] … For example, two robotic devices synchronize their actions to build a brick wall. The first robotic device is responsible for spreading the cement and the second device is responsible for placing the bricks. When the first robotic device finishes its task, it informs the second robotic device to place the bricks. When the second robotic device finishes placing the bricks, it informs the other device to spread another layer of cement.)
EXAMINER NOTE: Each of the above passages indicates the presence of robots having a plurality of parts.
(ii) code defining a plurality of user-defined actions, including a first action and a second action, and
(Sanchez - [0021] As shown in FIG. 1, the system (100) disclosed herein is characterized in that it comprises one or more executing BDI agents (300) in charge of controlling actions of robotic devices (200) so that it may execute the series of events, a director BDI agent (400) responsible for configuring each executing BDI agent (300) and monitoring the execution of actions they execute, and an interpreting BDI agent (500) through which a user of the system defines the series of events, monitors its status and progress, and sets control instructions. FIGS. 2 to 4 show detailed diagrams of the system (100) wherein each module constituting BDI agents (300, 400, 500), robotic devices (200) and the communication channel (610, 620, 630) are identified.)
EXAMINER NOTE: A "series of events" indicates the presence of at least two events (actions). In the context of a robotic environment, control instructions are conveyed to the robotic device through machine code derived from the user-specified input.
(iii) at least one custom reaction that specifies a condition of the first action under which a real-time robotics control layer should initiate a real-time change in behavior involving the second action of the plurality of user-defined actions;
(Sanchez - [0008] … an interpreting BDI agent including an authoring module having a user interface through which system users specify the series of events and control their execution, a translation module translating the series of events to the graph, and a monitoring module of the execution of the series of events.
[0043] Thus, according to a preferred embodiment of the invention, the graph (910) is implemented through an extension of Petri Nets with Active Transitions. … The active transition nodes (AT) are used to synchronize actions in the series of events (900). This type of node has two main purposes: first, to synchronize actions that a single robotic device (200) must carry out. For example, a robotic device can fly over a field while taking pictures and releasing marking marks at specific locations, all at the same time. The second purpose of the active transition nodes (AT) is to synchronize actions executed by two or more robotic devices (200). For example, two robotic devices synchronize their actions to build a brick wall. The first robotic device is responsible for spreading the cement and the second device is responsible for placing the bricks. When the first robotic device finishes its task, it informs the second robotic device to place the bricks. When the second robotic device finishes placing the bricks, it informs the other device to spread another layer of cement.)
EXAMINER NOTE: The user specifies the series of events (actions) and controls their execution (conditions of the actions). These events are translated to the graph, which is an abstraction of the series of events input by the user. The graph is implemented through Petri Nets with Active Transitions, which is ultimately an abstraction of the user-defined actions and reactions. In the brick-laying example, the user ultimately specifies that the second robotic device places the bricks after the first robot spreads cement.
and executing, by the real-time robotics control layer, a real-time session, including, at each tick of a real-time control cycle of the real-time session defined by the user input,
(Sanchez - [0044] Action line nodes (DA, UA), which determine actions to be executed, have two types of duration: defined and undefined. Action line nodes (AD) of defined duration are associated with actions that run during a limited, predictable time interval. …)
EXAMINER NOTE: See Annotated Fig. 5 below. Action line nodes have a defined duration. This duration is essentially a time window (a tick).
[Annotated Fig. 5 (media_image1.png, 870 × 637, greyscale)]
… performing operations comprising:
Executing, during the real-time session defined by the user input, one or more respective commands for the first action,
Determining, by the real-time robotics control layer during the real-time session defined by the user input, whether the condition of the first action of the plurality of actions is satisfied, and
in response to the real-time robotics control layer determining that the condition of the first action of the plurality of actions is satisfied, switching, within the same real-time session defined by the user input, from executing the first action during a current tick of the real-time control cycle of the real-time session to executing the second action of the plurality of user-defined actions on the next tick of the real-time control cycle during the same real-time session,
(Sanchez - [0045] - In FIG. 5, the transition AT.sub.2 will only activate after actions DA.sub.1 and DA.sub.2 have completed; at this moment, the execution of action DA.sub.3 will begin which, upon completion, will activate transition AT.sub.3, thereby simultaneously starting actions DA.sub.4, DA.sub.5 and DA.sub.6. Transition AT.sub.4 begins actions UA7, of undefined duration, and DA.sub.8, of defined duration. Action UA.sub.7 will be executed until the active transition AT.sub.5 orders its completion as the execution of the action of defined duration DA.sub.8 was completed or because subsequent actions UA.sub.9 and UA.sub.10 meet their activation conditions.)
EXAMINER NOTE: During the first interval (tick), DA.sub.1 (first command) is performed. Once DA.sub.1 and DA.sub.2 have completed (condition), DA.sub.3 begins (second action) during the next time interval (next tick). Because the time interval is "limited" (see [0044] cited above), it meets applicant's definition of a "tick" (see p.6, lines 5-12).
(Sanchez - [0007] The present development relates to a system implementing a multi-agent BDI architecture to control in real time, and in a concurrent and cooperative manner, robotic devices reproducing a series of events defined by a user.)
EXAMINER NOTE: The control is done in real time.
Sanchez may not explicitly teach the following limitations. However, Scholan teaches
wherein the robot is configured to enter a fault state if one or more commands are not received from the real-time robotics control layer within each tick of the real-time control cycle of the real-time session.
(Scholan - [0044] … Specifically, the safety device 314 can prevent communications between the main controller 312 and one or more of the components 302, 304 (or parts or components thereof) when it has been detected that the surgical robot system 300 is in a fault state. In some cases, the components or devices in the system that communicate with the main controller 312 may be configured to: receive communications from the main controller at a predetermined interval or frequency, and automatically transition into a safe state if they cease to receive such communications for a period of time (e.g. a predetermined number of intervals). Accordingly, cutting off communication between the main controller and a component or device may automatically cause that component or device to transition to a safe state.)
Sanchez may not explicitly teach the following limitations. However, Larsen teaches
One or more non-transitory computer storage media storing instructions that when executed by one or more computers cause the one or more computers to perform operations
(Larsen - [0139] Embodiments of the present disclosure may comprise a system having a central processing unit (CPU) 520, a bus 530, a memory unit 540, a power supply unit (PSU) 550, and one or more Input/Output (I/O) units. The CPU 520 coupled to the memory unit 540 and the plurality of I/O units 560 via the bus 530, all of which are powered by the PSU 550. It should be understood that, in some embodiments, each disclosed unit may actually be a plurality of such units for the purposes of redundancy, high availability, and/or performance. The combination of the presently disclosed units is configured to perform the stages any method disclosed herein.)
While Sanchez and Scholan may not explicitly state that the real-time session is defined by user input, this aspect is notoriously well-known in the art. As an example, Shah teaches a system wherein a user interacts with a device to provide inputs and start a control session.
(Shah - [0016] … The user 105 interacts with the robot simulation server 130 using the primary client device 110 in order to initialize, customize, begin, run, and monitor a robot simulation session. … The users 120 interact with the robot simulation server 130 using the user-controlled client devices 115, for instance by providing inputs to control the robot via the user-controlled client devices 115.
[0019] It should be noted that in various embodiments, the primary client device 110, the user-controlled client devices 115, and the machine-controlled client devices 125 overlap. For instance, the primary client device 110 can be used by the user 105 to request a robot simulation session, but can include an autonomous robot control program configured to control a robot during the simulation session autonomously. )
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement the system taught by Sanchez on the computing hardware described by Larsen in order to provide a means for implementing the processes, and to implement Scholan’s fault states in order to prevent undesired outcomes during malfunctions.
(Scholan - [0005] … If the system is not operating as expected there can be severe, if not, catastrophic consequences. Accordingly, it may be desirable to implement one or more safety mechanisms which are able to determine if there is a fault with the surgical robot system, and the control system 128 in particular, and if a fault is detected, put the system, or one or more components of the system, into a safe state.)
Regarding the aspect of a session defined by user input, Shah illustrates that this concept is not new, and one of ordinary skill in the art would be more than capable of making the session instantiable by the user, and the invention would have yielded the same predictable result.
Claim 20
The combination of Sanchez, Scholan, Shah, and Larsen teaches the limitations of claim 19 as outlined above. Sanchez further teaches
wherein each action of the plurality of actions causes the real-time robotics control layer to issue the one or more respective commands to different respective parts of the plurality of parts of the robot.
(Sanchez - [0075] On the other hand, and according to other embodiments of the present development, executing BDI agents (300) comprise a means-ends management module (390) which controls actions of the executing BDI agent (300) depending on their desires and intentions. The means-ends management module (390) execute intentions of the executing BDI agent (300) and sends signals to the action module (350) to control actuators (210) of the robotic device in a concurrent manner and in real time.)
EXAMINER NOTE: A plurality of actuators indicates different parts of a robot.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Sanchez in view of Scholan, Shah, and Larsen (US 20230158662 A1), and further in view of Gilliand (US 6282460 B1).
Claim 14
CLAIM INTERPRETATION NOTE: The examiner interprets a "final singular action" as a command to stop one or more robotic devices after a number of actions have been performed. Support for this interpretation may be found in the instant specification (p.18, ln 21-27). The specification states, "In some cases, user input can further specify: (iv) a final singular action, … For example, after the execution of each action associated with a respective robot arm is finished, the custom reaction can cause the real-time robotics control layer to execute commands associated with the default stop action …"
The combination of Sanchez, Scholan, Shah, and Larsen teaches the limitations of claim 10 as outlined above. The cited combination may not explicitly teach the following limitations; however, Gilliand teaches
wherein the user input further specifies: (iv) a final singular action,
(Gilliand - [col 8, ln 50-53] The operator can then further modify the programs to insert additional stops or to modify the job of a robot so as to prevent the collision from occurring.)
and (v) a custom reaction associated with the plurality of actions that causes the real-time robotics control layer to execute commands associated with the final singular action when the commands associated with the plurality of actions finished executing.
(Gilliand - [col 17, ln 25-40] To avoid collisions between the robots the present invention provides that the operator synchronizes the job programs for the robots. The process of synchronization requires the operator to insert a series of stop commands (stops) into the job for each robot, determine which stops are necessary to prevent collisions, add the conditions under which the program may be resumed, and remove unnecessary stops. Therefore, in the above example, robots 100A-100D would be executing their respective jobs and robots 100B and 100D would be heading for a collision. However, just prior to the collision point, robot 100D would encounter a stop in its job. The other robots 100 would continue in their operations because they had not encountered a stop command. Once these robots have completed the operations which remove them from the danger zone then they encounter a stop command.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to implement stop commands as taught by Gilliand into the robot control system taught by Sanchez in order to further synchronize robots and prevent collisions.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US-20210362328-A1, US-20150314440-A1, US-20120010772-A1, and US-20150217449-A1 pertain to rule-based organizing of robot actions, and teach various aspects of the invention, particularly those of claims 1, 2, 6, and 7.
US-9616570-B2 pertains to rule-based organizing of robot actions, and teaches checking conditions of actions (such as in claims 6 and 7), as well as what may be interpreted as "singular actions," such as in claims 3, 4, and 5.
WO 2020130662 A1 and CN 110267773 A also pertain to rule-based organizing of robot actions while specifically using sensor thresholds, and are considered relevant to claim 9.
US 20230185278 A1 pertains to determination of a robot fault state when a command is not received during a given timeframe.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES MILLER WATTS whose telephone number is (703)756-1249. The examiner can normally be reached 7:30-5:30 M-TH.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached on 571-270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES MILLER WATTS III/Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657