Prosecution Insights
Last updated: April 19, 2026
Application No. 17/781,119

METHOD, COMPUTER PROGRAM PRODUCT AND ROBOT CONTROLLER FOR CONFIGURING A ROBOT-OBJECT SYSTEM ENVIRONMENT, AND ROBOT

Status: Non-Final OA (§103)
Filed: May 31, 2022
Examiner: KASPER, BYRON XAVIER
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Siemens Aktiengesellschaft
OA Round: 3 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 70% (72 granted / 103 resolved), +17.9% vs TC avg, above average
Interview Lift: strong, +18.4% across resolved cases with interview
Typical Timeline: 3y 0m average prosecution, 36 applications currently pending
Career History: 139 total applications across all art units
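
For readers who want to reproduce the headline figures, the sketch below shows one way they fit together, assuming the career allow rate is simply granted over resolved cases and the interview lift is the allow-rate difference between resolved cases with and without an interview. The with/without split used here is hypothetical; only the 72 granted / 103 resolved totals appear in the panel.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(72, 103)            # ~69.9%, displayed as 70%

# Hypothetical with/without-interview split (totals match 72/103; the split
# itself is assumed purely for illustration):
with_interview = allow_rate(28, 34)     # ~82.4%
without_interview = allow_rate(44, 69)  # ~63.8%
interview_lift = with_interview - without_interview   # ~18.6 percentage points

print(f"career allow rate: {career:.1f}%")
print(f"interview lift:    {interview_lift:+.1f} pp")
```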

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center average is an estimate. Based on career data from 103 resolved cases.
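
If the "vs TC avg" deltas are read as the examiner's per-statute figure minus the Tech Center average (an assumption, not stated in the panel), the implied TC averages can be recovered directly, as in this small sketch:

```python
# Figures taken from the panel above; the subtraction rule is an assumption.
examiner_rate = {"§101": 10.9, "§103": 56.3, "§102": 11.9, "§112": 16.4}   # percent
delta_vs_tc = {"§101": -29.1, "§103": 16.3, "§102": -28.1, "§112": -23.6}  # percentage points

for statute, rate in examiner_rate.items():
    implied_tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {implied_tc_avg:.1f}%")
# With the figures shown above, every implied TC average works out to 40.0%.
```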

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This communication is responsive to Application No. 17/781,119 and the amendments filed on 6/10/2025.

3. Claims 1-15 are presented for examination.

Information Disclosure Statement

4. The information disclosure statement (IDS) submitted on 7/28/2022 has been fully considered by the Examiner.

Response to Arguments

5. Applicant’s arguments, see page 11, filed 6/10/2025, with respect to the rejection of claims 1-15 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The rejection of claims 1-15 under 35 U.S.C. 112(b) of 4/14/2025 has been withdrawn.

6. Applicant’s arguments with respect to the rejection of claim(s) 1-15 under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Regarding independent claim 1, the Examiner agrees that the amendments overcome the previous combination of US 20200398435 A1 to Okura and US 9691151 B1 to Anderson-Sprecher. However, in light of the amendments and the Applicant’s remarks, an updated search was conducted, and a new ground of rejection concerning claim 1 has been determined, which will be described later. Regarding independent claim 12, as this claim contains similar limitations to claim 1, it is still rejected for similar reasons as claim 1, which will be described later. Regarding dependent claims 2-11 and 13-15, as all of these claims depend from either claim 1 or claim 12, they are still rejected, as will be described later.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

9. Claim(s) 1 and 11-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bai (US 11494632 B1 hereinafter Bai) in view of Corkum et al. (US 20180126553 A1 hereinafter Corkum) and Ohashi et al. (JP 2005088175 A hereinafter Ohashi).

Regarding Claim 1, Bai teaches a method for configuring a robot-object system environment having at least one object and a robot for manipulating (Col. 10 lines 38-47, where “During each episode, the robot 180 (or another robot) is controlled to cause the robot to attempt performance of a robotic task.
… As one non-limiting example, the robotic task can be a grasping task where the robot 180 attempts to grasp one (e.g., any one) of the objects 192 utilizing the end effector 182. For instance, at the start of each episode, the robot 180 can be in a corresponding starting pose (e.g., a pseudo-randomly determined pose).”) and capturing objects (Col. 9 lines 48-60, where “In other implementations, the vision component 184 can alternatively be physically coupled to the robot 180. … Vision component 184 includes one or more sensors and generates data frames (e.g., images or point clouds) related to shape, color, depth, and/or other features of object(s) that are in the line of sight of the sensor(s).”), in which a digital robot twin, which digitally represents the robot-object system environment and controls the robot for manipulating objects on the basis of a control program (Col. 12 lines 24-32, where “The simulator 120 is implemented by one or more computer systems and is used to simulate various environments that include corresponding environmental objects, to simulate a robot operating in the simulated environment (e.g., to simulate robot 180), to simulate responses of the robot in response to virtual implementation of various simulated robotic actions, and to simulate interactions between the simulated robot and the simulated environmental objects in response to the simulated robotic actions.”), (Col. 13 lines 13-26, where “ In performing each such simulated episode, the simulated episode engine 124 further attempts to control a simulated robot to mimic the movements of the robot 180 in the corresponding episode data instance. For example, the simulated episode engine 124 can control a simulated robot to cause the simulated robot to traverse the trajectory defined by the trajectory data of the episode data instance. In these manners, in performing such a simulated episode, the simulated episode engine 124 attempts to simulate a corresponding one of the real episodes by configuring the environment based on the environmental data of the episode data instance of the real episode and by controlling the simulated robot in conformance with the robot data of the episode data instance of the real episode.”), is synchronized for use of the robot in the robot-object system environment when manipulating objects, wherein the digital robot twin is synchronized, in two stages (Col. 2 lines 4-17, where “As used herein, the “reality gap” is a difference that exists between real robots and real environments—and simulated robots and simulated environments simulated by a robotic simulator. … In some of those implementations, multiple iterations of quantifying the reality gap and adapting parameter(s) of a robotic simulator are performed, until it is determined that the reality gap achieved with certain parameter(s) satisfies the one or more criteria.”), (Col. 8 line 64 – Col. 9 line 13, where “Training of machine learning models that are robust and accurate, and that can be utilized for control of real-world physical robots, is often limited by the scalability of using real-world physical robots to generate a sufficient quantity of training examples and/or to generate training examples that are sufficiently diverse. … Implementations described herein present techniques for adapting parameter(s) of a robotic simulator to reduce the reality gap between the robotic simulator and real-world physical robot(s) and/or a real-world environment. 
The robotic simulator with the adapted parameters can then be used in generating simulated training examples. The simulated training examples can be used in training of one or more machine learning models that can be used in the control of real-world physical robots.”), (Note: The Examiner interprets reducing the “reality gap” of Bai as synchronizing the digital robot twin.), wherein a) in a first stage, each object in the robot-object system environment is optically captured with respect to an object position during the control program run until (Col. 11 lines 35-41, where “The environmental data engine 114 can utilize one or more techniques in determining poses of environmental objects. For example, the environmental data engine 114 can compare point cloud data generated by the vision component 184 to a stored object model of the container 191 and/or to stored object models of the objects 192 to determine 6D poses of the container 191 and/or the objects 192.”). Bai is silent on a1) the position of the object has been determined with sufficient accuracy for a first-stage accuracy requirement based on a first-stage robot-object minimum distance, or a2) an improvement in the accuracy of the object position in the digital robot twin is determined to be performed by a second stage with regard to the synchronization, and b) in the second stage, each object in the robot-object system environment is captured with respect to an object position during the control program run by determining an object pose distribution or by determining an object pose distribution and robot contact with the object until b1) the position of the object has been determined with sufficient accuracy for a second-stage accuracy requirement based on a second-stage robot-object minimum distance, wherein the second-stage robot-object minimum distance is determined based on a plurality of probabilistic pose distributions, or b2) no improvement in the accuracy of the object position in the digital robot twin is determined to be performed. However, Corkum teaches a1) the position of the object has been determined with sufficient accuracy for a first-stage accuracy requirement based on a first-stage robot-object minimum distance, or a2) an improvement in the accuracy of the object position in the digital robot twin is determined to be performed by a second stage with regard to the synchronization ([0040] via “In particular, when an object feature is detected in an image output by the camera 150 and/or when this object feature detected in the image is of sufficient resolution (e.g., “large enough”) to reliably define an object reference frame, the controller 160 can locate an object reference frame according to this object feature, such as by virtually locating an origin of the object reference frame on this object feature and aligning axes of the object reference frame to one or more axes of the object feature. 
The controller 160 can thus register motion of the end effector 140 to this object reference frame, such as by calculating poses of the end effector 140 within the object reference frame based on the position, size, skew, and orientation of the object feature detected in the field of view of the camera 150 and then implementing closed-loop controls to move the end effector 140 along a preplanned trajectory projected into the object reference feature.”), ([0112] via “Once the arm reaches the second position, the camera 150 can record a second image; and the controller 160 can scan the second image for the screw, as described above. In particular, because the second position is nearer the screw than the first position, the second image may represent the screw at a greater resolution than the first image. The controller 160 can thus implement the foregoing methods and techniques to calculate a third position for the arm along or near the preplanned trajectory—based on the position of the screw represented in the second image—to bring the screwdriver into position to precisely engage the screw.”), (Note: The Examiner interprets Corkum to teach step a1).). Further Ohashi teaches b) in the second stage, each object in the robot-object system environment is captured with respect to an object position during the control program run by determining an object pose distribution or by determining an object pose distribution and robot contact with the object (Page 12 paragraph 4 via “First, as shown in FIG. 9A, it is assumed that the predicted trajectory K of the object 20 calculated from the position measurement also has a probability distribution K (x, t). When the ball 20 is at a position away from the robot apparatus 1, the position measurement accuracy of the ball (object) 20 is low, so the predicted trajectory of the object calculated from the position measurement and the range of the probability distribution K (x, t) are wide. Become. Therefore, the existence probability distribution Od (x, t) of the ball 20 on the hitting surface D (x) = 0 is wide. The existence probability distribution Od (x, t) is determined by the probability distribution K (x, t) of the predicted trajectory and the existence probability distribution O (x, t) of the ball 20.”) until b1) the position of the object has been determined with sufficient accuracy for a second-stage accuracy requirement based on a second-stage robot-object minimum distance, wherein the second-stage robot-object minimum distance is determined based on a plurality of probabilistic pose distributions, or b2) no improvement in the accuracy of the object position in the digital robot twin is determined to be performed (Page 9 paragraph 8 via “Approach method based on the movable range distribution H (x) of the upper body and the movable range distribution (hereinafter referred to as the movable range distribution of the lower body) L (x) by the lower body motion of the center of gravity Hc of the movable range distribution H (x). ... As shown in FIG. 5A, when the robot apparatus 1 is at a distance LE far from the target object 10, the position measurement accuracy of the target object 10 is low, and therefore the existence probability distribution O () where the target object 10 is present exists. 
x) is larger than the actual volume of the object 10.”), (Page 10 paragraph 3 via “During operation approach gait evaluation value of the above formula (1), always or periodically calculated, if the evaluation value does not exceed the threshold Vl .sub.1 switching, most approaches movable range is wide control accuracy due to lower walking Do. Since the position measurement accuracy of the target object 10 increases as the robot apparatus 1 approaches the target object 10, the existence probability distribution O (x) of the target object becomes narrow. In the approach action by walking, the movable range distribution L (x) of the lower body is assumed to indicate the distribution range when standing upright.”), (Note: The Examiner interprets Ohashi to teach step b1). Also, see Figures 5 and 9 of Ohashi, reproduced below.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Corkum wherein a1) the position of the object has been determined with sufficient accuracy for a first-stage accuracy requirement based on a first-stage robot-object minimum distance, or a2) an improvement in the accuracy of the object position in the digital robot twin is determined to be performed by a second stage with regard to the synchronization. Doing so increases the accuracy in detecting the position of the object by bringing the camera closer to the object, as stated by Corkum ([0017] via “By thus realigning the preplanned trajectory to the target object (e.g., to an object feature or constellation of object features) detected in the field of view of the camera 150 as the end effector 140 approaches the target object, the system 100 can achieve increased locational accuracy of the end effector 140 relative to the target object as the end effector 140 nears the target object while also accommodating wide variances in the location and orientation of the target object from its expected location and orientation and/or accommodating wide variances in the location and orientation of one unit of the target object to a next unit of the target object.”).

In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ohashi wherein b) in the second stage, each object in the robot-object system environment is captured with respect to an object position during the control program run by determining an object pose distribution or by determining an object pose distribution and robot contact with the object until b1) the position of the object has been determined with sufficient accuracy for a second-stage accuracy requirement based on a second-stage robot-object minimum distance, wherein the second-stage robot-object minimum distance is determined based on a plurality of probabilistic pose distributions, or b2) no improvement in the accuracy of the object position in the digital robot twin is determined to be performed. Doing so continually updates the control of the robot based on an ever-changing positional accuracy of the object, as stated by Ohashi (Page 10 paragraph 8 – Page 11 paragraph 1 via “Thus, based on the distance to the object 10, the object existence probability distribution O (x) indicating the position measurement accuracy of the object 10 of the robot apparatus 1 is obtained, and this existence probability distribution O (x) Based on the evaluation value obtained from the movable range distribution in motion, it is possible to control while taking into account the control accuracy of each approach motion by switching the motion step by step in the order from low control accuracy to high control accuracy. When the control attention point is finally brought into contact with the object 10, the part with the highest redundancy in the movable range can be moved to the movement target position, and the object 10 can be gripped.”).

[Figure 5 of Ohashi]
[Figure 9 of Ohashi]

Regarding Claim 11, modified reference Bai teaches the method as claimed in claim 1, wherein after synchronization, the digital robot twin, including the control program controlling the robot for manipulating objects, geometrical data relating to the robot-object system environment, a process requirement and/or uncertainty statements for the objects in the robot-object system environment, is updated (Col. 12 lines 40-59, where “As described in more detail below, the parameters dictated by the configuration engine 122 during a given simulated episode can be adapted based on feedback from the sim modification system 130, which causes the configuration engine 122 to iteratively adapt one or more parameters based on determinations of reality measures as described herein. … Simulated robot parameters can include, for example, friction coefficients for simulated gripper(s) of the simulated robot, modeling (e.g., number of joint(s)) of simulated gripper(s) of the simulated robot, control parameter(s) for the simulated gripper(s), control parameter(s) for simulated actuator(s) of the simulated robot, etc. Environmental parameters can include, for example, friction coefficient(s) for simulated environmental object(s), size and/or pose of fixed simulated environmental object(s), simulated object model(s) utilized, etc.”).

Regarding Claim 12, Bai teaches a computer program product, comprising a non-transitory computer readable hardware storage device having computer readable program code stored therein, said program code having a program module executable by a processor of a computer system to implement a method (Col. 8 lines 15-21, where “Other implementations may include a non-transitory computer readable storage medium storing instructions executable by one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s)) to perform a method such as one or more of the methods described above and/or elsewhere herein.”), (Col. 23 lines 16-20, where “Storage subsystem 724 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 724 may include the logic to perform selected aspects of one or more methods described herein.”) for configuring a robot-object system environment which has at least one object and a robot for manipulating (Col.
10 lines 38-47, where “During each episode, the robot 180 (or another robot) is controlled to cause the robot to attempt performance of a robotic task. ... As one non-limiting example, the robotic task can be a grasping task where the robot 180 attempts to grasp one (e.g., any one) of the objects 192 utilizing the end effector 182. For instance, at the start of each episode, the robot 180 can be in a corresponding starting pose (e.g., a pseudo-randomly determined pose).”) and capturing objects (Col. 9 lines 48-60, where “In other implementations, the vision component 184 can alternatively be physically coupled to the robot 180. … Vision component 184 includes one or more sensors and generates data frames (e.g., images or point clouds) related to shape, color, depth, and/or other features of object(s) that are in the line of sight of the sensor(s).”) and in the process synchronizes a digital robot twin, which digitally represents the robot-object system environment and controls the robot for manipulating objects on the basis of a control program, for use of the robot in the robot-object system environment when manipulating objects (Col. 12 lines 24-32, where “The simulator 120 is implemented by one or more computer systems and is used to simulate various environments that include corresponding environmental objects, to simulate a robot operating in the simulated environment (e.g., to simulate robot 180), to simulate responses of the robot in response to virtual implementation of various simulated robotic actions, and to simulate interactions between the simulated robot and the simulated environmental objects in response to the simulated robotic actions.”), (Col. 13 lines 13-26, where “ In performing each such simulated episode, the simulated episode engine 124 further attempts to control a simulated robot to mimic the movements of the robot 180 in the corresponding episode data instance. For example, the simulated episode engine 124 can control a simulated robot to cause the simulated robot to traverse the trajectory defined by the trajectory data of the episode data instance. In these manners, in performing such a simulated episode, the simulated episode engine 124 attempts to simulate a corresponding one of the real episodes by configuring the environment based on the environmental data of the episode data instance of the real episode and by controlling the simulated robot in conformance with the robot data of the episode data instance of the real episode.”), wherein the program module is created and the processor executing the control program instructions of the program module for configuring the robot-object system environment is configured in such a manner that the digital robot twin is synchronized, in two stages (Col. 2 lines 4-17, where “As used herein, the “reality gap” is a difference that exists between real robots and real environments—and simulated robots and simulated environments simulated by a robotic simulator. … In some of those implementations, multiple iterations of quantifying the reality gap and adapting parameter(s) of a robotic simulator are performed, until it is determined that the reality gap achieved with certain parameter(s) satisfies the one or more criteria.”), (Col. 8 line 64 – Col. 
9 line 13, where “Training of machine learning models that are robust and accurate, and that can be utilized for control of real-world physical robots, is often limited by the scalability of using real-world physical robots to generate a sufficient quantity of training examples and/or to generate training examples that are sufficiently diverse. … Implementations described herein present techniques for adapting parameter(s) of a robotic simulator to reduce the reality gap between the robotic simulator and real-world physical robot(s) and/or a real-world environment. The robotic simulator with the adapted parameters can then be used in generating simulated training examples. The simulated training examples can be used in training of one or more machine learning models that can be used in the control of real-world physical robots.”), (Note: The Examiner interprets reducing the “reality gap” of Bai as synchronizing the digital robot twin.), wherein a) in a first stage, each object in the robot-object system environment is optically captured with respect to an object position during the control program run until (Col. 11 lines 35-41, where “The environmental data engine 114 can utilize one or more techniques in determining poses of environmental objects. For example, the environmental data engine 114 can compare point cloud data generated by the vision component 184 to a stored object model of the container 191 and/or to stored object models of the objects 192 to determine 6D poses of the container 191 and/or the objects 192.”). Bai is silent on a1) the position of the object has been determined with sufficient accuracy for a first-stage accuracy requirement based on a first-stage robot-object minimum distance, or a2) an improvement in the accuracy of the object position in the digital robot twin is determined to be performed by a second stage with regard to the synchronization, and b) in the second stage, each object in the robot-object system environment is captured with respect to an object position during the control program run by determining an object pose distribution or by determining an object pose distribution and robot contact with the object until b1) the position of the object has been determined with sufficient accuracy for a second-stage accuracy requirement based on a second-stage robot-object minimum distance, wherein the second-stage robot-object minimum distance is determined based on a plurality of probabilistic pose distributions, or b2) no improvement in the accuracy of the object position in the digital robot twin is determined to be performed. However, Corkum teaches a1) the position of the object has been determined with sufficient accuracy for a first-stage accuracy requirement based on a first-stage robot-object minimum distance, or a2) an improvement in the accuracy of the object position in the digital robot twin is determined to be performed by a second stage with regard to the synchronization ([0040] via “In particular, when an object feature is detected in an image output by the camera 150 and/or when this object feature detected in the image is of sufficient resolution (e.g., “large enough”) to reliably define an object reference frame, the controller 160 can locate an object reference frame according to this object feature, such as by virtually locating an origin of the object reference frame on this object feature and aligning axes of the object reference frame to one or more axes of the object feature. 
The controller 160 can thus register motion of the end effector 140 to this object reference frame, such as by calculating poses of the end effector 140 within the object reference frame based on the position, size, skew, and orientation of the object feature detected in the field of view of the camera 150 and then implementing closed-loop controls to move the end effector 140 along a preplanned trajectory projected into the object reference feature.”), ([0112] via “Once the arm reaches the second position, the camera 150 can record a second image; and the controller 160 can scan the second image for the screw, as described above. In particular, because the second position is nearer the screw than the first position, the second image may represent the screw at a greater resolution than the first image. The controller 160 can thus implement the foregoing methods and techniques to calculate a third position for the arm along or near the preplanned trajectory—based on the position of the screw represented in the second image—to bring the screwdriver into position to precisely engage the screw.”), (Note: The Examiner interprets Corkum to teach step a1).). Further, Ohashi teaches b) in the second stage, each object in the robot-object system environment is captured with respect to an object position during the control program run by determining an object pose distribution or by determining an object pose distribution and robot contact with the object (Page 12 paragraph 4 via “First, as shown in FIG. 9A, it is assumed that the predicted trajectory K of the object 20 calculated from the position measurement also has a probability distribution K (x, t). When the ball 20 is at a position away from the robot apparatus 1, the position measurement accuracy of the ball (object) 20 is low, so the predicted trajectory of the object calculated from the position measurement and the range of the probability distribution K (x, t) are wide. Become. Therefore, the existence probability distribution Od (x, t) of the ball 20 on the hitting surface D (x) = 0 is wide. The existence probability distribution Od (x, t) is determined by the probability distribution K (x, t) of the predicted trajectory and the existence probability distribution O (x, t) of the ball 20.”) until b1) the position of the object has been determined with sufficient accuracy for a second-stage accuracy requirement based on a second-stage robot-object minimum distance, wherein the second-stage robot-object minimum distance is determined based on a plurality of probabilistic pose distributions, or b2) no improvement in the accuracy of the object position in the digital robot twin is determined to be performed (Page 9 paragraph 8 via “Approach method based on the movable range distribution H (x) of the upper body and the movable range distribution (hereinafter referred to as the movable range distribution of the lower body) L (x) by the lower body motion of the center of gravity Hc of the movable range distribution H (x). ... As shown in FIG. 5A, when the robot apparatus 1 is at a distance LE far from the target object 10, the position measurement accuracy of the target object 10 is low, and therefore the existence probability distribution O () where the target object 10 is present exists. 
x) is larger than the actual volume of the object 10.”), (Page 10 paragraph 3 via “During operation approach gait evaluation value of the above formula (1), always or periodically calculated, if the evaluation value does not exceed the threshold Vl .sub.1 switching, most approaches movable range is wide control accuracy due to lower walking Do. Since the position measurement accuracy of the target object 10 increases as the robot apparatus 1 approaches the target object 10, the existence probability distribution O (x) of the target object becomes narrow. In the approach action by walking, the movable range distribution L (x) of the lower body is assumed to indicate the distribution range when standing upright.”), (Note: The Examiner interprets Ohashi to teach step b1). Also, see Figures 5 and 9 of Ohashi, reproduced above.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Corkum wherein a1) the position of the object has been determined with sufficient accuracy for a first-stage accuracy requirement based on a first-stage robot-object minimum distance, or a2) an improvement in the accuracy of the object position in the digital robot twin is determined to be performed by a second stage with regard to the synchronization. Doing so increases the accuracy in detecting the position of the object by bringing the camera closer to the object, as stated by Corkum ([0017] via “By thus realigning the preplanned trajectory to the target object (e.g., to an object feature or constellation of object features) detected in the field of view of the camera 150 as the end effector 140 approaches the target object, the system 100 can achieve increased locational accuracy of the end effector 140 relative to the target object as the end effector 140 nears the target object while also accommodating wide variances in the location and orientation of the target object from its expected location and orientation and/or accommodating wide variances in the location and orientation of one unit of the target object to a next unit of the target object.”). In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ohashi wherein b) in the second stage, each object in the robot-object system environment is captured with respect to an object position during the control program run by determining an object pose distribution or by determining an object pose distribution and robot contact with the object until b1) the position of the object has been determined with sufficient accuracy for a second-stage accuracy requirement based on a second-stage robot-object minimum distance, wherein the second-stage robot-object minimum distance is determined based on a plurality of probabilistic pose distributions, or b2) no improvement in the accuracy of the object position in the digital robot twin is determined to be performed. 
Doing so continually updates the control of the robot based on an ever-changing positional accuracy of the object, as stated by Ohashi (Page 10 paragraph 8 – Page 11 paragraph 1 via “Thus, based on the distance to the object 10, the object existence probability distribution O (x) indicating the position measurement accuracy of the object 10 of the robot apparatus 1 is obtained, and this existence probability distribution O (x) Based on the evaluation value obtained from the movable range distribution in motion, it is possible to control while taking into account the control accuracy of each approach motion by switching the motion step by step in the order from low control accuracy to high control accuracy. When the control attention point is finally brought into contact with the object 10, the part with the highest redundancy in the movable range can be moved to the movement target position, and the object 10 can be gripped.”). Regarding Claim 13, modified reference Bai teaches the computer program product as claimed in claim 12, wherein the program module is created and the processor is configured in such a manner that the method steps of the method are carried out (Col. 8 lines 15-21, where “Other implementations may include a non-transitory computer readable storage medium storing instructions executable by one or more processors (e.g., central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s)) to perform a method such as one or more of the methods described above and/or elsewhere herein.”). Regarding Claim 14, modified reference Bai teaches a robot controller for configuring a robot-object system environment which has at least one object and a robot for manipulating (Col. 10 lines 38-47, where “During each episode, the robot 180 (or another robot) is controlled to cause the robot to attempt performance of a robotic task. … As one non-limiting example, the robotic task can be a grasping task where the robot 180 attempts to grasp one (e.g., any one) of the objects 192 utilizing the end effector 182. For instance, at the start of each episode, the robot 180 can be in a corresponding starting pose (e.g., a pseudo-randomly determined pose).”) and capturing objects (Col. 9 lines 48-60, where “In other implementations, the vision component 184 can alternatively be physically coupled to the robot 180. … Vision component 184 includes one or more sensors and generates data frames (e.g., images or point clouds) related to shape, color, depth, and/or other features of object(s) that are in the line of sight of the sensor(s).”), having a digital robot twin (Col. 12 lines 24-32, where “The simulator 120 is implemented by one or more computer systems and is used to simulate various environments that include corresponding environmental objects, to simulate a robot operating in the simulated environment (e.g., to simulate robot 180), to simulate responses of the robot in response to virtual implementation of various simulated robotic actions, and to simulate interactions between the simulated robot and the simulated environmental objects in response to the simulated robotic actions.”), (Col. 13 lines 13-26, where “ In performing each such simulated episode, the simulated episode engine 124 further attempts to control a simulated robot to mimic the movements of the robot 180 in the corresponding episode data instance. 
For example, the simulated episode engine 124 can control a simulated robot to cause the simulated robot to traverse the trajectory defined by the trajectory data of the episode data instance. In these manners, in performing such a simulated episode, the simulated episode engine 124 attempts to simulate a corresponding one of the real episodes by configuring the environment based on the environmental data of the episode data instance of the real episode and by controlling the simulated robot in conformance with the robot data of the episode data instance of the real episode.”) which contains a control program, which controls the robot in the robot-object system environment when manipulating objects (Col. 23 lines 16-20, where “Storage subsystem 724 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 724 may include the logic to perform selected aspects of one or more methods described herein.”), and a data memory, which contains geometrical data relating to the robot-object system environment, and having a configuration data memory (Col. 11 lines 35-41, where “The environmental data engine 114 can utilize one or more techniques in determining poses of environmental objects. For example, the environmental data engine 114 can compare point cloud data generated by the vision component 184 to a stored object model of the container 191 and/or to stored object models of the objects 192 to determine 6D poses of the container 191 and/or the objects 192.”), (Col. 23 lines 21-27, where “These software modules are generally executed by processor 714 alone or in combination with other processors. Memory 725 used in the storage subsystem 724 can include a number of memories including … in which fixed instructions are stored.”), wherein a computer program product as claimed in claim 12 for carrying out the method, which with the digital robot twin and the configuration data memory, forms a functional unit for configuring the robot-object system environment (See rejection of claim 12 under 35 U.S.C. 103 above.). Regarding Claim 15, modified reference Bai teaches a robot having a robot controller as claimed in claim 14 (Col. 9 lines 34-37, where “Robot 180 is a “robot arm” having multiple degrees of freedom to enable traversal of grasping end effector 182 along any of a plurality of potential paths to position the grasping end effector 182 in desired locations.”), (Col. 21 lines 35-42, where “If the system determines at block 560, that further training should not occur, the system proceeds to block 562 and uses the trained machine learning model in the control of one or more real robots. For example, the trained machine learning model can be stored locally on one or more computer readable media of a real robot, and utilized by a control system of the real robot in one or more aspects of control of the real robot by the control system.”). 10. Claim(s) 2-6 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bai (US 11494632 B1 hereinafter Bai) in view of Corkum et al. (US 20180126553 A1 hereinafter Corkum) and Ohashi et al. (JP 2005088175 A hereinafter Ohashi), and further in view of Sturrock et al. (US 20090089027 A1 hereinafter Sturrock). 
Regarding Claim 2, modified reference Bai teaches the method as claimed in claim 1, wherein a first polling loop is run through for each object in the first stage, in which a) a first-stage uncertainty is estimated in a first instruction block during each run by comparing environment measurement data determined when optically capturing objects with first simulation measurement data from a first-stage digital robot twin (Col. 14 lines 24-46, where “When the reality measure determined by the reality measure engine 132 fails to satisfy a threshold and/or other criterion/criteria, the sim modification engine 134 of the sim modification system 130 can modify one or more parameters utilized by the configuration engine 122 during the simulated episodes utilized to determine the reality measure, and provide feedback to the configuration engine 122 to cause the configuration engine 122 to modify the parameters. Various parameters can be modified, such as simulated robot parameters of the simulated robot and/or environmental parameters that dictate one or more properties of one or more simulated environmental objects. … In manual and/or automated techniques, a quantity of parameters modified and/or extent(s) of the modification(s) can optionally be based on the reality measure. For example, derivative free optimization techniques can modify parameter(s) more aggressively when the reality measure is indicative of a relatively large reality gap, as compared to when the reality measure is indicative of a relatively smaller reality gap.”), (Col. 19 line 39 – Col. 20 line 12, where “One example of block 362 is described with reference to FIG. 4. In FIG. 4, a confusion matrix is illustrated and includes four separate blocks. … Accordingly, blocks A and D indicate a quantity of occurrences where the real and simulated success measures agree, and blocks B and C indicate a quantity of occurrences where the real and simulated success measures are in conflict. As illustrated in FIG. 4, one option for determining the reality measure based on such a confusion matrix is dividing the quantity of occurrences where the real and simulated success measures agree (“A+D”) by the total quantity of simulated episodes (indicated by “A+B+C+D”).”), (Note: See Figure 3 step 362 and Figure 4 of Bai as well.), b) the first-stage accuracy requirement is determined in a second instruction block, which is run through after the first instruction block, during each run (Col. 20 lines 13-19, where “After block 362, the system proceeds to block 364. At block 364, the system determines whether the reality measure satisfies a threshold and/or other criterion/criteria. As one non-limiting example, the threshold can be greater than 90%, using the equation of FIG. 4. If not, the system proceeds to block 366 and modifies one or more parameters for the simulator.”), (Note: See Figure 3 step 364 of Bai as well.), and d) when running through the first polling loop in a first instruction correction block, object position data relating to the first-stage digital robot twin are updated by applying object pose estimation methods to the environment measurement data to thus reduce the first-stage uncertainty (Col. 20 lines 13-31, where “After block 362, the system proceeds to block 364. At block 364, the system determines whether the reality measure satisfies a threshold and/or other criterion/criteria. As one non-limiting example, the threshold can be greater than 90%, using the equation of FIG. 4. 
If not, the system proceeds to block 366 and modifies one or more parameters for the simulator. At block 366, the system modifies one or more parameters for a robotic simulator. As described herein, which parameters are modified, and/or the extent(s) of modification(s) can optionally be based on the reality measure. After block 366, the system again performs multiple iterations of blocks 352, 354, 356, 358, and 360 utilizing the parameters for the simulator, as modified in the most recent iteration of block 366. The system will then perform an additional iteration of block 362, and again perform another iteration of block 364. If, at an iteration of block 364, the system determines the reality measure satisfies the threshold and/or other criteria/criterion, the system proceeds to block 368.”), (Note: See Figure 3 of Bai as well.). Bai is silent on c) loop run conditions are checked in a first-stage loop poll, wherein there is a change from the first stage to the second stage on account of a first loop run condition check, the first polling loop is run through on account of a second loop run condition check, the synchronization of the digital robot twin has been successfully carried out and is therefore ended on account of a third loop run condition check, and the synchronization of the digital robot twin cannot be successfully carried out on account of a fourth loop run condition check and is therefore aborted, and user actions are therefore required. However, Sturrock teaches c) loop run conditions are checked in a first-stage loop poll, wherein there is a change from the first stage to the second stage on account of a first loop run condition check, the first polling loop is run through on account of a second loop run condition check, the synchronization of the digital robot twin has been successfully carried out and is therefore ended on account of a third loop run condition check, and the synchronization of the digital robot twin cannot be successfully carried out on account of a fourth loop run condition check and is therefore aborted, and user actions are therefore required ([0028] via “When the simulation models 130 have been identified and gathered, the simulation component 110 executes a simulation based on the simulation models, stores and returns the result of the simulation at 120. By storing the results of the simulations, users can quickly identify failed or successful simulations, as well as simulation models that are similar to the current simulation for comparative purposes. If a problem occurs during simulation or the simulation fails, the simulation component 110 identifies the particular simulation models that were the root of the failure. In one aspect, the simulation component 110 simulates to the smallest level of granularity to facilitate the most accurate simulation possible. However, if a particular combination of simulation models has been run repeatedly, the simulation component 110 can identify this through the simulation database, notify the user that a repeated simulation has been executed, and refrain simulating that portion of the model (perhaps after prompting the user for permission).”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Sturrock wherein c) loop run conditions are checked in a first-stage loop poll, wherein there is a change from the first stage to the second stage on account of a first loop run condition check, the first polling loop is run through on account of a second loop run condition check, the synchronization of the digital robot twin has been successfully carried out and is therefore ended on account of a third loop run condition check, and the synchronization of the digital robot twin cannot be successfully carried out on account of a fourth loop run condition check and is therefore aborted, and user actions are therefore required. Doing so allows users to access the simulations if the simulations fail, reducing costs and inefficient uses of time with respect to failed simulations, as stated by Sturrock ([0062] via “By storing the results of the simulations, users can quickly identify failed or successful simulations, as well as simulation models that are similar to the current simulation for comparative purposes. … However, if a particular combination of simulation models has been run repeatedly, the simulation component 830 can identify this through the simulation database 810, notify the user that a repeated simulation has been executed, and refrain simulating that portion of the model after prompting the user for permission. This enables users to access a simulation history efficiently and circumvent costs or inefficient use of time associated with duplicate or even substantially similar simulations. Note that if multiple manufacturing paths exist, the simulation component 802 can simulate various paths and present the user with several options.”). Regarding Claim 3, modified reference Bai teaches the method as claimed in claim 2, wherein dedicated instruction steps are carried out when running through the first instruction block, including: a first instruction step for planning a robot trajectory to capture a scene of the robot-object system environment assuming discrepancies between the digital robot twin and the robot-object system environment (Col. 17 lines 21-27, where “At block 252, a real physical robot performs an episode of a robotic task. The robotic task can be, for example, a manipulation task. For instance, the manipulation task can be a grasping task in which the real physical robot traverses a corresponding trajectory and attempts to interact with one or more corresponding environmental objects in an attempt to grasp the one or more environmental objects.”), (Col. 17 lines 41-46, where “At block 2542, the system stores environmental data for the episode data instance. The environmental data can define a beginning environmental state (e.g., 6D pose) for one or more (e.g., all) environmental objects at the start of the episode, such as environmental objects in a work space of the real physical robot during the episode.”), (Col. 19 line 39 – Col. 20 line 12, where “One example of block 362 is described with reference to FIG. 4. In FIG. 4, a confusion matrix is illustrated and includes four separate blocks. … Accordingly, blocks A and D indicate a quantity of occurrences where the real and simulated success measures agree, and blocks B and C indicate a quantity of occurrences where the real and simulated success measures are in conflict. As illustrated in FIG. 
4, one option for determining the reality measure based on such a confusion matrix is dividing the quantity of occurrences where the real and simulated success measures agree (“A+D”) by the total quantity of simulated episodes
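
For reference, the reality measure the excerpt quotes from Bai's FIG. 4 confusion matrix reduces to a simple agreement ratio between real and simulated success outcomes, compared against the "greater than 90%" threshold mentioned at block 364. A minimal sketch, with the function name and episode counts assumed for illustration:

```python
def reality_measure(a: int, b: int, c: int, d: int) -> float:
    """A and D count episodes where real and simulated success measures agree;
    B and C count episodes where they conflict: (A + D) / (A + B + C + D)."""
    return (a + d) / (a + b + c + d)

measure = reality_measure(a=41, b=3, c=4, d=52)   # hypothetical episode counts
if measure > 0.90:   # "greater than 90%" threshold per the quoted passage
    print(f"reality measure {measure:.1%}: keep simulator parameters and proceed")
else:
    print(f"reality measure {measure:.1%}: modify simulator parameters and re-run")
```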

Prosecution Timeline

May 31, 2022
Application Filed
May 31, 2022
Response after Non-Final Action
Oct 01, 2024
Non-Final Rejection — §103
Dec 20, 2024
Response Filed
Apr 05, 2025
Final Rejection — §103
May 20, 2025
Examiner Interview Summary
May 20, 2025
Applicant Interview (Telephonic)
Jun 10, 2025
Response after Non-Final Action
Jul 07, 2025
Request for Continued Examination
Jul 10, 2025
Response after Non-Final Action
Aug 27, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594964
METHOD OF AND SYSTEM FOR GENERATING REFERENCE PATH OF SELF DRIVING CAR (SDC)
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12594137
HARD STOP PROTECTION SYSTEM AND METHOD
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12583101
METHOD FOR OPERATING A MODULAR ROBOT, MODULAR ROBOT, COLLISION AVOIDANCE SYSTEM, AND COMPUTER PROGRAM PRODUCT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12576529
ROBOT SIMULATION DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12564962
ROBOT REMOTE OPERATION CONTROL DEVICE, ROBOT REMOTE OPERATION CONTROL SYSTEM, ROBOT REMOTE OPERATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 88% (+18.4%)
Median Time to Grant: 3y 0m
PTA Risk: High
Based on 103 resolved cases by this examiner. Grant probability derived from career allow rate.
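
A minimal sketch of how these projections appear to be assembled from the examiner statistics above, assuming the base grant probability is the career allow rate and the with-interview figure adds the +18.4 point lift (the additive relationship is an assumption for illustration):

```python
base_grant_probability = 100.0 * 72 / 103   # career allow rate, ~69.9% (shown as 70%)
interview_lift_points = 18.4                # percentage points, from the examiner panel
with_interview = min(base_grant_probability + interview_lift_points, 100.0)  # ~88.3% (shown as 88%)

print(f"grant probability:                {base_grant_probability:.0f}%")
print(f"grant probability with interview: {with_interview:.0f}%")
```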
