Prosecution Insights
Last updated: April 19, 2026
Application No. 17/799,711

CONTROL DEVICE, CONTROL METHOD AND STORAGE MEDIUM

Status: Final Rejection (§103)
Filed: Aug 15, 2022
Examiner: EVANS, KARSTON G
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Corporation
OA Round: 4 (Final)

Prediction Summary
Grant Probability: 70% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 10m
Grant Probability With Interview: 91%

Examiner Intelligence

Career Allow Rate: 70% (100 granted / 143 resolved; +17.9% vs TC avg) — above average
Interview Lift: +21.3% for resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 31 applications currently pending
Career History: 174 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 143 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The amendment filed 9/30/2025 has been entered. Claims 1, 9, 12, and 13 are amended. Claims 1-7 and 9-13 remain pending in the application.

Applicant's amendments to the claims have overcome each and every 112(b) rejection set forth in the Final Office Action mailed 12/18/2024. Applicant's arguments, see pages 8-10, with respect to the 112(b) rejections of claims 9-11 have been fully considered and are persuasive. The 112(b) rejections are withdrawn accordingly.

Applicant's arguments, see pages 11-12, with respect to the cited prior art not teaching the amended features have been fully considered and are not persuasive. The applicant argues that cost is totally different from the claimed "utility" because the applicant applies a narrower interpretation in which utility describes a priority and Laftchiev describes an opposite concept. However, the claim does not mention any priorities, and the claim limitation uses language which may be interpreted more broadly than the applicant argues. Under the broadest reasonable interpretation (BRI) of the claim, the cost to fast action of the robot (as taught by at least [0107-0108] of Laftchiev) is equivalent to a weight for the utility of the utility function. Additionally, Laftchiev teaches the amended features because the cost is used by the robot/human model such that the robot can more effectively assist both efficient/faster and inefficient/slower human workers (e.g., by changing the speed of the robot according to at least [0095]). The claim merely specifies that the robot assists a working body with a lower work efficiency and does not require that only those with a lower work efficiency receive assistance. Under BRI, Laftchiev covers this claim limitation by assisting both efficient/faster and inefficient/slower human workers. Accordingly, the cited prior art is maintained in the prior art rejections.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Murphy (US 10471597 B1) in view of Johnson (US 20200086487 A1) and Laftchiev (US 20210173377 A1).
Regarding Claim 1, Murphy teaches:

A control device comprising: at least one memory storing instructions; and at least one processor configured to execute the instructions to: (“The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.” See at least col. 20, lines 54-59)

receive a detection signal output by a detection device having a detection range that includes a workspace of a robot and a working body other than the robot, which together perform cooperative work; (“The stereo camera devices 220(1)-(A) generally represent camera devices that are oriented so as to capture digital images of the item 250.” See at least col. 8, lines 18-31; “the robotic arm controller 390 could capture one or more images of the retrieval location (e.g., a particular holder or tote within an automated product distribution center) using one or more camera devices.” See at least col. 10, lines 58-63; See at least fig. 9 and col. 17, lines 1-9 for a second robotic arm performing cooperative work in the workspace. Examiner Interpretation: The camera is a detection device and its range includes at least a portion of the workspace because it is able to capture images of objects, holders, and/or totes within the workspace. The claim does not specify how much of the workspace the detection range includes, only that the workspace includes a robot and another working body.)

recognize types and states of objects present in the workspace, based on the detection signal, wherein the objects include (“the robotic arm controller 390 could capture images of the item in the particular location and determine which of a plurality of pre-generated 3D models best matches the appearance of the item in the captured images. The robotic arm controller 390 could then retrieve the 3D model that best matches the item, for use in retrieving the item. Once the 3D model is retrieved, the robotic arm controller 390 can then use the retrieved 3D model 353 for one or more object identification operations. … the robotic arm controller 390 determines an estimated pose of the particular item at the designated location, using the 3D model.” See at least col. 10, line 64 through col. 11, line 30; Also see at least col. 16, lines 4-8 for item recognition for a second item.)

generate, based on recognition results relating to the states of the target object (“The robotic arm controller 390 can then use the estimated pose of the particular item to determine an optimal way to control the robotic picking arm 392, in order to best retrieve the particular item. … The robotic arm controller 390 could determine a location of the optimal surface, based on the particular item's current pose, and manipulate the robotic picking arm 392 to grasp the particular item by the optimal surface from a determined optimal angle. … the robotic arm controller 390 determines an optimal manner to release a given object using a corresponding one of the 3D models 353.” See at least col. 11, lines 30-48)

Murphy does not explicitly teach, but Johnson teaches:

recognize types and states of objects present in the workspace, based on the detection signal, wherein the objects include the working body other than the robot; … generate, based on recognition results relating to the states of … the working body other than the robot, an operation sequence for the robot; (“While, at points, embodiments are described herein as preventing robot-human collision, embodiments of the present disclosure are not so limited and can be used to prevent collision between robots and any objects. For example, embodiments can extend to objects other than people (e.g., animals) or other robots which are not necessarily networked to the current robot.” See at least [0046]; “The method 330, at 331, detects a type and a location of an object based on a camera image of the object. … The method 330 continues at 332 by predicting motion of the object based on at least one of (i) the detected type of the object, (ii) the detected location of the object, and (iii) a model of object motion. … To continue, at 333, a motion plan for a robot is generated that avoids having the robot collide with the object. According to an embodiment, the motion plan is generated at 333 based on the predicted motion of the object.” See at least [0051-0053])

wherein the other working body includes a plurality of other working bodies, (“controlling a robot to move in such a way so as to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task.” See at least [0046], wherein humans and other robots are a plurality of other working bodies.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Murphy to further include the teachings of Johnson with a reasonable expectation of success “to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task. Embodiments may be employed in shared workspaces so that a robot and a human co-worker can safely collaborate.” (See at least [0006]; Also see at least [0080] for improving trajectory generation.)
Johnson also does not explicitly teach, but Laftchiev teaches:

wherein the at least one processor is configured to execute the instructions to design a utility function to weight utilities for work of the plurality of other working bodies based on work efficiencies of the plurality of working bodies to assist a working body with a lower work efficiency among the work efficiencies, and wherein each weight for the utilities decreases with increasing work efficiency. (“a robot could be trained to slow its actions to match the worker actions thereby reducing errors. Alternatively, a robot could be trained to bring the parts closer to the worker to improve focus. Overall, the series of manufacturing events might be rerouted such that other workers replace some of the load observed by the current worker. This means that the statistical models monitoring the worker can also be used to improve the human-robot manufacturing process adaptively with respect to the worker condition.” See at least [0095]; “a predictive model can be learned to infer quantities like predicted completion time or predicted worker movements and a classification model to infer a state of the worker(s) and the task on which the worker is currently working. Then second, we initialize a control law or policy that provides the robot to perform reasonably well in the operations the robot has to accomplish to help the worker. … Here the robot only has one possible action speed up or slow down. The human has only two possible states tired or energetic. The robot must learn that when the human is energetic it can have a faster speed and when the human is tired it can have a slower speed. When the statistical models are learned the robot will learn that there is a higher cost to fast action if the worker is tired, and an optimal cost to lower speed. The same is true in reverse if the worker is energetic. … it is advantageous to first learn a model of the human state evolution and include this in the combined human/robot model.” See at least [0107-0108]; Examiner Interpretation: The cost to fast action of the robot is equivalent to a weight for the utility. An energetic worker is equivalent to a working body with increased work efficiency. The cost to fast action is decreased when the worker is energetic, which is equivalent to the weight for the utility being decreased with increased work efficiency of the working body (worker). The cost is advantageous for the robot/human model such that the robot can more effectively assist both efficient and inefficient human workers.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Murphy and Johnson to further include the teachings of Laftchiev with a reasonable expectation of success because “understanding of the typical performance and state of the human worker means that this information can be used in the collaboration between the robot and the human itself such that the combined human/robot performance can be improved. These improvements stem from an optimization of the interaction of the robot with the human worker.” (See at least [0009])

Regarding Claim 2, Murphy does not explicitly teach, but Johnson teaches:

wherein the at least one processor is configured to execute the instructions to: recognize an operation by the working body based on the detection signal and prior information relating to the operation by the working body; (“The method 330, at 331, detects a type and a location of an object based on a camera image of the object.
… The method 330 continues at 332 by predicting motion of the object based on at least one of (i) the detected type of the object, (ii) the detected location of the object, and (iii) a model of object motion.” See at least [0051-0052], wherein the predicted motion is a recognized operation by the working body. The type and location are from the detection signal of the camera, and (i) the detected type of the object, (ii) the detected location of the object, and (iii) a model of object motion are all interpreted as prior information relating to the operation by the working body.)

determine, based on the recognition results relating to the operation by the working body, a model of abstracted dynamics of the working body; and generate the operation sequence based on the model and the recognition results relating to the types and the states of the objects. (“determining, a model of future motion of the obstacle 448 based on the predicted future motion 447. With the model of future motion 448, embodiments can generate motion plans for a robot that avoid collision with objects, e.g., a human co-worker.” See at least [0077]; “a more accurate model of the obstacle can be used for collision avoidance. In such an embodiment, the obstacle, e.g., human, is modeled as a time-varying obstacle with the volume the obstacle occupies at each specific point in time or time step in a timed simulation. By adding time, as an additional dimension (degree of freedom) to the world model, a collision-free path planning method according to an embodiment finds a path which accommodates the motion of the obstacle as it is predicted to occur in time.” See at least [0080], wherein the path/motion planning generates an operation sequence; See at least [0067] for the object identification/recognition; See at least [0046] where the obstacle avoidance can be extended to people, other robots, and inanimate objects.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Johnson with a reasonable expectation of success “to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task. Embodiments may be employed in shared workspaces so that a robot and a human co-worker can safely collaborate.” (See at least [0006]; Also see at least [0080] for improving trajectory generation.)

Regarding Claim 3, Murphy does not explicitly teach, but Johnson teaches:

wherein the at least one processor is configured to execute the instructions to determine the model based on the prior information relating to the model of the abstracted dynamics of the working body for each of a plurality of candidate operations of the operation. (“the obstacle, e.g., human, is modeled as a time-varying obstacle with the volume the obstacle occupies at each specific point in time or time step in a timed simulation.” See at least [0080]; “A variety of physics-based dynamics models may also be used to predict motion. The various physics-based dynamics models that may be used are all characterized by a model, which, upon given the current state of the system, predicts a future state based on the laws of physics. In one such embodiment, in order to choose which equations to include in the physics-based model, a library of plausible models is created, and the correct model is selected by matching the output of a neural net classifier which determines the object type to the appropriate model.” See at least [0084], wherein the variety of physics-based models are interpreted to be prior information relating to the model of the abstracted dynamics of the working body for each of a plurality of candidate operations of the operation.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Johnson with a reasonable expectation of success “to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task. Embodiments may be employed in shared workspaces so that a robot and a human co-worker can safely collaborate.” (See at least [0006]; Also see at least [0080] for improving trajectory generation.)

Regarding Claim 4, Murphy does not explicitly teach, but Johnson teaches:

wherein the at least one processor is configured to further execute the instructions to learn parameters of the model based on the recognition results relating to the operation by the working body. (“In an example embodiment where a neural network is used for predicting motion of the object, i.e., human, the network is trained on examples of object motion that are for the appropriate domain for the task in question. For instance, in the example where the object being avoided is a human, the neural network is trained on examples of human motion in quick service restaurants (or the appropriate domain for the task in question). Such an embodiment predicts the long-term motion of key-points which are identified on the object and then estimates their motion into the future.” See at least [0083])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Johnson with a reasonable expectation of success “to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task. Embodiments may be employed in shared workspaces so that a robot and a human co-worker can safely collaborate.” (See at least [0006]; Also see at least [0080] for improving trajectory generation.)

Regarding Claim 5, Murphy does not explicitly teach, but Johnson teaches:

wherein the recognition results relating to the operation by the other working body includes recognition results relating to an ongoing operation and a predicted operation to be executed by the working body, and wherein the at least one processor is configured to execute the instructions to generate the operation sequence based on the recognition results relating to the ongoing operation and the predicted operation to be executed by the working body. (“the 3D human pose 445 is processed using a deep recurrent neural network (RNN) to predict future motion (e.g., generate the 3D motion prediction 447) of the human based on the past motion. … The method 440 continues by determining, a model of future motion of the obstacle 448 based on the predicted future motion 447. With the model of future motion 448, embodiments can generate motion plans for a robot that avoid collision with objects, e.g., a human co-worker.” See at least [0076-0077]; “The various physics-based dynamics models that may be used are all characterized by a model, which, upon given the current state of the system, predicts a future state based on the laws of physics.” See at least [0084])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Johnson with a reasonable expectation of success “to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task.
Embodiments may be employed in shared workspaces so that a robot and a human co-worker can safely collaborate.” (See at least [0006]; Also see at least [0080] for improving trajectory generation.)

Regarding Claim 6, Murphy does not explicitly teach, but Laftchiev teaches:

wherein the at least one processor is configured to execute the instructions to generate the operation sequence based on a work efficiency of each of the plurality of other working bodies. (“the robot can slow its actions to match those of a tired human worker. Alternate examples of robot actions could include performing additional tasks, calling a supervisor for help, holding parts closer to the human worker (at the cost of additional time), improving the comfort of the immediate area by adding light, heating/cooling, etc., and others. This can be understood that the process control system can be designed to optimize the process at the human-robot collaboration level, by adjusting the help that the robot is providing to the human worker subject to the condition of the worker.” See at least [0009]; “a classification model to infer a state of the worker(s) and the task on which the worker is currently working. Then second, we initialize a control law or policy that provides the robot to perform reasonably well in the operations the robot has to accomplish to help the worker.” See at least [0107])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Laftchiev with a reasonable expectation of success because “understanding of the typical performance and state of the human worker means that this information can be used in the collaboration between the robot and the human itself such that the combined human/robot performance can be improved. These improvements stem from an optimization of the interaction of the robot with the human worker.” (See at least [0009])

Regarding Claim 7, Murphy does not explicitly teach, but Laftchiev teaches:

wherein the at least one processor is configured to execute the instructions to optimize the utility function to generate the operation sequence. (“the robot can slow its actions to match those of a tired human worker. Alternate examples of robot actions could include performing additional tasks, calling a supervisor for help, holding parts closer to the human worker (at the cost of additional time), improving the comfort of the immediate area by adding light, heating/cooling, etc., and others. This can be understood that the process control system can be designed to optimize the process at the human-robot collaboration level, by adjusting the help that the robot is providing to the human worker subject to the condition of the worker.” See at least [0009]; “A policy for the robot based on the robot model 607B and on the task 617B, the robot has to achieve, e.g., help the human worker in the assembly line task, is computed. The policy can be achieved with any policy optimization algorithm 612B, with model based reinforcement learning or optimal control as described above. When, the joint human-robot model 605B is computed, this can be used to improve the robot policy 612B, that can be updated considering not only the robot model 607B and the task 617B, but also having information of the human model 604B, in order to have new robot policy 612B.” See at least [0110])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Laftchiev with a reasonable expectation of success because “understanding of the typical performance and state of the human worker means that this information can be used in the collaboration between the robot and the human itself such that the combined human/robot performance can be improved. These improvements stem from an optimization of the interaction of the robot with the human worker.” (See at least [0009])

Regarding Claim 12, Murphy teaches:

A control method performed by a computer and comprising: (“The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.” See at least col. 20, lines 54-59)

receiving a detection signal output by a detection device having a detection range that includes a workspace of a robot and a working body other than the robot, which together perform cooperative work; (“The stereo camera devices 220(1)-(A) generally represent camera devices that are oriented so as to capture digital images of the item 250.” See at least col. 8, lines 18-31; “the robotic arm controller 390 could capture one or more images of the retrieval location (e.g., a particular holder or tote within an automated product distribution center) using one or more camera devices.” See at least col. 10, lines 58-63; See at least fig. 9 and col. 17, lines 1-9 for a second robotic arm performing cooperative work in the workspace. Examiner Interpretation: The camera is a detection device and its range includes at least a portion of the workspace because it is able to capture images of objects, holders, and/or totes within the workspace. The claim does not specify how much of the workspace the detection range includes, only that the workspace includes a robot and another working body.)

recognizing types and states of objects present in the workspace, based on the detection signal, wherein the objects include (“the robotic arm controller 390 could capture images of the item in the particular location and determine which of a plurality of pre-generated 3D models best matches the appearance of the item in the captured images. The robotic arm controller 390 could then retrieve the 3D model that best matches the item, for use in retrieving the item. Once the 3D model is retrieved, the robotic arm controller 390 can then use the retrieved 3D model 353 for one or more object identification operations. … the robotic arm controller 390 determines an estimated pose of the particular item at the designated location, using the 3D model.” See at least col. 10, line 64 through col. 11, line 30; Also see at least col. 16, lines 4-8 for item recognition for a second item.)

generating, based on recognition results relating to the states of the target object (“The robotic arm controller 390 can then use the estimated pose of the particular item to determine an optimal way to control the robotic picking arm 392, in order to best retrieve the particular item. … The robotic arm controller 390 could determine a location of the optimal surface, based on the particular item's current pose, and manipulate the robotic picking arm 392 to grasp the particular item by the optimal surface from a determined optimal angle.
… the robotic arm controller 390 determines an optimal manner to release a given object using a corresponding one of the 3D models 353.” See at least col. 11, lines 30-48) Murphy does not explicitly teach, Johnson teaches recognizing types and states of objects present in the workspace, based on the detection signal, wherein the objects include the working body other than the robot; … generating, based on recognition results relating to the states of … the working body other than the robot, an operation sequence for the robot; (“While, at points, embodiments are described herein as preventing robot-human collision, embodiments of the present disclosure are not so limited and can be used to prevent collision between robots and any objects. For example, embodiments can extend to objects other than people (e.g., animals) or other robots which are not necessarily networked to the current robot.” See at least [0046]; “The method 330, at 331, detects a type and a location of an object based on a camera image of the object. … The method 330 continues at 332 by predicting motion of the object based on at least one of (i) the detected type of the object, (ii) the detected location of the object, and (iii) a model of object motion. … To continue, at 333, a motion plan for a robot is generated that avoids having the robot collide with the object. According to an embodiment, the motion plan is generated at 333 based on the predicted motion of the object.” See at least [0051-0053]) wherein the other working body includes a plurality of other working bodies, (“controlling a robot to move in such a way so as to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task.” See at least [0046], wherein humans and other robots are a plurality of other working bodies.) 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Murphy to further include the teachings of Johnson with a reasonable expectation of success “to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task. Embodiments may be employed in shared workspaces so that a robot and a human co-worker can safely collaborate.” (See at least [0006]; Also see at least [0080] for improving trajectory generation.) Johnson also does not explicitly teach, but Laftchiev teaches wherein the control method comprises: designing a utility function to weight utilities for work of the plurality of other working bodies based on work efficiencies of the plurality of working bodies to assist a working body with a lower work efficiency among the work efficiencies, and wherein each weight for the utilities decreases with increasing work efficiency. (“a robot could be trained to slow its actions to match the worker actions thereby reducing errors. Alternatively, a robot could be trained to bring the parts closer to the worker to improve focus. Overall, the series of manufacturing events might be rerouted such that other workers replace some of the load observed by the current worker. This means that the statistical models monitoring the worker can also be used to improve the human-robot manufacturing process adaptively with respect to the worker condition.” See at least [0095]; “a predictive model can be learned to infer quantities like predicted completion time or predicted worker movements and a classification model to infer a state of the worker(s) and the task on which the worker is currently working. Then second, we initialize a control law or policy that provides the robot to perform reasonably well in the operations the robot has to accomplish to help the worker. 
… Here the robot only has one possible action speed up or slow down. The human has only two possible states tired or energetic. The robot must learn that when the human is energetic it can have a faster speed and when the human is tired it can have a slower speed. When the statistical models are learned the robot will learn that there is a higher cost to fast action if the worker is tired, and an optimal cost to lower speed. The same is true in reverse if the worker is energetic. … it is advantageous to first learn a model of the human state evolution and include this in the combined human/robot model.” See at least [0107-0108]; Examiner Interpretation: The cost to fast action of the robot is equivalent to a weight for the utility. An energetic worker is equivalent to a working body with increased work efficiency. Cost to fast action is decreased when the worker is energetic; this is equivalent to the weight for the utility being decreased with increased work efficiency of the working body (worker). The cost is advantageous for the robot/human model such that the robot can more effectively assist both efficient and inefficient human workers.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Murphy and Johnson to further include the teachings of Laftchiev with a reasonable expectation of success because “understanding of the typical performance and state of the human worker means that this information can be used in the collaboration between the robot and the human itself such that the combined human/robot performance can be improved.
These improvements stem from an optimization of the interaction of the robot with the human worker.” (See at least [0009]) Regarding Claim 13, Murphy teaches A non-transitory computer readable storage medium storing a program executable by a computer to perform processing comprising: (“The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.” See at least col. 20, lines 54-59) receiving a detection signal output by a detection device having a detection range that includes a workspace of a robot and a working body other than the robot, which together perform cooperative work; (“The stereo camera devices 220(1)-(A) generally represent camera devices that are oriented so as to capture digital images of the item 250.” See at least col. 8, lines 18-31; “the robotic arm controller 390 could capture one or more images of the retrieval location (e.g., a particular holder or tote within an automated product distribution center) using one or more camera devices.” See at least col. 10, lines 58-63; See at least fig. 9 and col. 17, lines 1-9 for a second robotic arm performing cooperative work in the workspace.; Examiner Interpretation: The camera is a detection device and its range includes at least a portion of the workspace because it is able to capture images of objects, holders, and/or totes within the workspace. The claim does not specify how much of the workspace the detection range includes, only that the workspace includes a robot and another working body.)
recognizing types and states of objects present in the workspace, based on the detection signal, wherein the objects include (“the robotic arm controller 390 could capture images of the item in the particular location and determine which of a plurality of pre-generated 3D models best matches the appearance of the item in the captured images. The robotic arm controller 390 could then retrieve the 3D model that best matches the item, for use in retrieving the item. Once the 3D model is retrieved, the robotic arm controller 390 can then use the retrieved 3D model 353 for one or more object identification operations. … the robotic arm controller 390 determines an estimated pose of the particular item at the designated location, using the 3D model.” See at least col. 10, line 64 through col. 11, line 30; Also see at least col. 16, lines 4-8 for item recognition for a second item.) generating, based on recognition results relating to the states of the target object (“The robotic arm controller 390 can then use the estimated pose of the particular item to determine an optimal way to control the robotic picking arm 392, in order to best retrieve the particular item. … The robotic arm controller 390 could determine a location of the optimal surface, based on the particular item's current pose, and manipulate the robotic picking arm 392 to grasp the particular item by the optimal surface from a determined optimal angle. … the robotic arm controller 390 determines an optimal manner to release a given object using a corresponding one of the 3D models 353.” See at least col. 
11, lines 30-48) Murphy does not explicitly teach, but Johnson teaches recognizing types and states of objects present in the workspace, based on the detection signal, wherein the objects include the working body other than the robot; … generating, based on recognition results relating to the states of … the working body other than the robot, an operation sequence for the robot; (“While, at points, embodiments are described herein as preventing robot-human collision, embodiments of the present disclosure are not so limited and can be used to prevent collision between robots and any objects. For example, embodiments can extend to objects other than people (e.g., animals) or other robots which are not necessarily networked to the current robot.” See at least [0046]; “The method 330, at 331, detects a type and a location of an object based on a camera image of the object. … The method 330 continues at 332 by predicting motion of the object based on at least one of (i) the detected type of the object, (ii) the detected location of the object, and (iii) a model of object motion. … To continue, at 333, a motion plan for a robot is generated that avoids having the robot collide with the object. According to an embodiment, the motion plan is generated at 333 based on the predicted motion of the object.” See at least [0051-0053]) wherein the other working body includes a plurality of other working bodies, (“controlling a robot to move in such a way so as to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task.” See at least [0046], wherein humans and other robots are a plurality of other working bodies.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Murphy to further include the teachings of Johnson with a reasonable expectation of success “to avoid collision with both static and moving obstacles, such as inanimate objects, humans, animals, or other robots, amongst other examples, while still accomplishing a task. Embodiments may be employed in shared workspaces so that a robot and a human co-worker can safely collaborate.” (See at least [0006]; Also see at least [0080] for improving trajectory generation.) Johnson also does not explicitly teach, but Laftchiev teaches wherein the processing comprises: designing a utility function to weight utilities for work of the plurality of other working bodies based on work efficiencies of the plurality of working bodies to assist a working body with a lower work efficiency among the work efficiencies, and wherein each weight for the utilities decreases with increasing work efficiency. (“a robot could be trained to slow its actions to match the worker actions thereby reducing errors. Alternatively, a robot could be trained to bring the parts closer to the worker to improve focus. Overall, the series of manufacturing events might be rerouted such that other workers replace some of the load observed by the current worker. This means that the statistical models monitoring the worker can also be used to improve the human-robot manufacturing process adaptively with respect to the worker condition.” See at least [0095]; “a predictive model can be learned to infer quantities like predicted completion time or predicted worker movements and a classification model to infer a state of the worker(s) and the task on which the worker is currently working. Then second, we initialize a control law or policy that provides the robot to perform reasonably well in the operations the robot has to accomplish to help the worker. 
… Here the robot only has one possible action speed up or slow down. The human has only two possible states tired or energetic. The robot must learn that when the human is energetic it can have a faster speed and when the human is tired it can have a slower speed. When the statistical models are learned the robot will learn that there is a higher cost to fast action if the worker is tired, and an optimal cost to lower speed. The same is true in reverse if the worker is energetic. … it is advantageous to first learn a model of the human state evolution and include this in the combined human/robot model.” See at least [0107-0108]; Examiner Interpretation: The cost to fast action of the robot is equivalent to a weight for the utility. An energetic worker is equivalent to a working body with increased work efficiency. Cost to fast action is decreased when the worker is energetic; this is equivalent to the weight for the utility being decreased with increased work efficiency of the working body (worker). The cost is advantageous for the robot/human model such that the robot can more effectively assist both efficient and inefficient human workers.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of Murphy and Johnson to further include the teachings of Laftchiev with a reasonable expectation of success because “understanding of the typical performance and state of the human worker means that this information can be used in the collaboration between the robot and the human itself such that the combined human/robot performance can be improved. These improvements stem from an optimization of the interaction of the robot with the human worker.” (See at least [0009]) Claim(s) 9 and 11 is/are rejected under 35 U.S.C.
103 as being unpatentable over Murphy (US 10471597 B1) in view of Johnson (US 20200086487 A1), Laftchiev (US 20210173377 A1), and Sahin (NPL: “Multirobot Coordination with Counting Temporal Logics”). Regarding Claim 9, Murphy does not explicitly teach, but Sahin teaches wherein the at least one processor is configured to execute the instructions to convert an objective task, to be performed by the robot into a logical formula; (“we can specify tasks such as “All robots must avoid collisions with obstacles” or “At least five robots should eventually visit region A” using cLTL+. An inner logic formula over a set AP of atomic propositions is defined recursively as follows: [image: recursive grammar of a cLTL+ inner logic formula]” See at least page 4, col. 1, paragraph 1; Also see paragraph [0077] of the instant application that recites “It is noted that there are various existing technologies for the method of converting tasks expressed in natural language into logical formulas,”) generate, from the logical formula, a time step logical formula representing states at each time step for completing the objective task; (“A local counter is used to keep track of how far a robot has moved along its trajectory. If πn denotes the trajectory and kn denotes the local counter of robot Rn, the position of Rn at time t is given by πn(kn(t)). … Figure 1. Local counters are initially set as K(0) = [0 0 0] at time t = 0; that is, each robot Rn is initially positioned at πn(0). Every robot completes a transition by time t = 1, so local counters are updated as K(1) = [1 1 1]. The red and the blue robots move slower than expected and fail to complete two transitions by time t = 2. The green robot, on the other hand, successfully completes two transitions by time t = 2. Thus, local counters are updated as K(2) = [1 2 1].” See at least page 3, col. 1, Definitions 3 and 4. The trajectories are generated to satisfy specifications given in cLTL+ (see page 2, col.
2, paragraph 1) and therefore the time step logical formula including the counters is based on the logical formula in cLTL+.) and generate, based on the time step logical formula, the operation sequence as a sequence of subtasks to be executed by the robot. (“The trajectory πn corresponding to the sequence wn = wn(0)wn(1). . . can then be extracted by locating the nonzero component in each wn(t)” See at least page 5, col. 1, A. Globally synchronous robot dynamics) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Sahin with a reasonable expectation of success to provide a scalable solution to improve coordination of robots that does not require robots to be synchronized perfectly or communicate during runtime (See at least page 1, I. Introduction). Regarding Claim 11, Murphy further teaches wherein the at least one processor is configured to execute the instructions to define, based on the recognition results, an abstract state of the objects present in the workspace, (“the robotic arm controller 390 determines an estimated pose of the particular item at the designated location, using the 3D model.” See at least col. 11, lines 23-25, wherein locations/poses of the items are abstract states.) Murphy does not explicitly teach, but Sahin teaches define … an abstract state … as propositions to be used in the logical formula. (“The specification …, including: • collision with obstacles, which are marked with D, should be avoided.” See at least page 11, col. 2, A. Emergency response example and fig. 2. Examiner Interpretation: Zone D in fig. 2 represents the position of the obstacles and the specification defining that collision with the obstacles should be avoided is a proposition in a logical formula.) 
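The local-counter bookkeeping in the quoted Sahin passage (Definitions 3 and 4, Figure 1) can be reproduced in a short sketch. The counter values come straight from the quoted example; the function names are illustrative assumptions.

```python
def update_counters(counters, completed):
    """Advance robot R_n's local counter k_n only when R_n completes a transition."""
    return [k + int(done) for k, done in zip(counters, completed)]

def positions(trajectories, counters):
    """Position of R_n at time t is pi_n(k_n(t)): index each trajectory by its counter."""
    return [traj[k] for traj, k in zip(trajectories, counters)]

# Figure 1 scenario: three robots, local counters K(0) = [0, 0, 0] at t = 0.
K = [0, 0, 0]
K = update_counters(K, [True, True, True])    # every robot completes a transition by t = 1
assert K == [1, 1, 1]                         # K(1) = [1 1 1]
K = update_counters(K, [False, True, False])  # only one robot completes a second transition
assert K == [1, 2, 1]                         # K(2) = [1 2 1]
```

Because a cLTL+ specification is evaluated against pi_n(k_n(t)) rather than pi_n(t), a slow robot delays only its own counter, which is what lets the synthesized trajectories tolerate imperfect synchronization, the benefit the rejection cites from Sahin's introduction.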
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Sahin with a reasonable expectation of success to provide a scalable solution to improve coordination of robots that does not require robots to be synchronized perfectly or communicate during runtime (See at least page 1, I. Introduction). Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Murphy (US 10471597 B1) in view of Johnson (US 20200086487 A1), Laftchiev (US 20210173377 A1), Sahin (NPL: “Multirobot Coordination with Counting Temporal Logics”), Torii (US 20200282549 A1), and Linkowski (US 20210146546 A1). Regarding Claim 10, Murphy does not explicitly teach, but Laftchiev teaches design a utility function for the objective task; (“Given a discrete time dynamics such as (1) and a cost function, the algorithm computes local linear models and quadratic cost functions for the system along a trajectory.” See at least [0127]) and generate a control input (“These linear models are then used to compute optimal control inputs and local gain matrices by iteratively solving the associated LQG problem.” See at least [0127]) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Laftchiev with a reasonable expectation of success because “understanding of the typical performance and state of the human worker means that this information can be used in the collaboration between the robot and the human itself such that the combined human/robot performance can be improved. 
These improvements stem from an optimization of the interaction of the robot with the human worker.” (See at least [0009]) Laftchiev does not explicitly teach, but Torii teaches and generate a control input for each time step for controlling the robot based on and the utility function, and wherein the at least one processor is configured to execute the instructions to generate the sequence of the subtasks based on the control input. (“the task management unit 110 manages the start time, the end time, and the execution period of a task which is allocated to the robot 1 and is to be executed (that is, a reserved state) or being executed.” See at least [0055]; “the robot management unit 140 may select the robot 2 that cooperates with the robot 1 on the basis of a plurality of perspectives such as the similarity and complementarity of the capability between each of the robots and the robot 1 and evaluation values or the like of the respective robots. Specifically, the robot management unit 140 may select, as the robot 2 that cooperates with the robot 1, a robot having a higher index that is calculated from a mathematical formula.” See at least [0087]; “the cooperation management unit 150 determines the configuration how each of the robots 1 and 2 cooperates, and outputs instructions of the determined configuration to each of the robots 1 and 2. Furthermore, the cooperation management unit 150 may also control the synchronization processing between the robot 1 and the robot 2 as the cooperation target.” See at least [0094]) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified Murphy to further include the teachings of Torii with a reasonable expectation of success to improve flexibility of robot control by causing the robot to cooperate with another robot in an environment where the situation changes dynamically (see at least [0012] and [0082]).
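The disputed limitation, each utility weight decreasing as work efficiency increases so that the least efficient working body receives assistance, can be made concrete with a minimal sketch. The inverse-efficiency weight and the speed-matching rule are illustrative assumptions; they are not taken from Laftchiev, Torii, or the application itself.

```python
def utility_weights(efficiencies):
    """Claimed relation: weights decrease monotonically as work efficiency increases.
    An inverse weight is one simple monotone-decreasing choice (assumption)."""
    return [1.0 / e for e in efficiencies]

def control_input(efficiencies, max_speed=1.0):
    """Per-time-step control input: direct assistance toward the working body
    with the largest utility weight, i.e. the least efficient one, and match
    the robot's speed to that worker (cf. Laftchiev's slow-to-match example)."""
    weights = utility_weights(efficiencies)
    target = max(range(len(weights)), key=weights.__getitem__)
    return {"assist": target, "speed": max_speed * efficiencies[target]}
```

Under this reading, Laftchiev's “higher cost to fast action if the worker is tired” plays the role of the weight: the cost rises exactly when efficiency falls, which is the equivalence the rejection relies on.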
Torii also does not explicitly teach, but Linkowski teaches wherein the at least one processor is configured to execute the instructions to: generate a model of abstracted dynamics of the workspace; (“The dynamic workspace modules 310 is configured to generate a dynamic model of the works
…

Prosecution Timeline

Aug 15, 2022
Application Filed
Aug 08, 2024
Non-Final Rejection — §103
Oct 11, 2024
Interview Requested
Oct 24, 2024
Examiner Interview Summary
Oct 24, 2024
Applicant Interview (Telephonic)
Nov 13, 2024
Response Filed
Dec 11, 2024
Final Rejection — §103
Feb 13, 2025
Request for Continued Examination
Feb 14, 2025
Response after Non-Final Action
Apr 23, 2025
Non-Final Rejection — §103
Sep 30, 2025
Response Filed
Oct 24, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602054
CONTROL DEVICE FOR MOBILE OBJECT, CONTROL METHOD FOR MOBILE OBJECT, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12600037
REMOTE CONTROL ROBOT, REMOTE CONTROL ROBOT CONTROL SYSTEM, AND REMOTE CONTROL ROBOT CONTROL METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12589493
INFORMATION PROCESSING APPARATUS AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12566457
BULK STORE SLOPE ADJUSTMENT VIA TRAVERSAL INCITED SEDIMENT GRAVITY FLOW
2y 5m to grant Granted Mar 03, 2026
Patent 12552023
METHOD FOR CONTROLLING A ROBOT, AND SYSTEM
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
70%
Grant Probability
91%
With Interview (+21.3%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 143 resolved cases by this examiner. Grant probability derived from career allow rate.
