Prosecution Insights
Last updated: April 19, 2026
Application No. 18/936,675

SIMULATING MULTIPLE ROBOTS IN VIRTUAL ENVIRONMENTS

Non-Final OA: §102, §DP

Filed: Nov 04, 2024
Examiner: AZHAR, ARSLAN
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Gdm Holding LLC
OA Round: 1 (Non-Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 77% (above average): 144 granted / 187 resolved, +25.0% vs TC avg
Interview Lift: +20.8% (strong): allow rate for resolved cases with vs. without an interview
Typical Timeline: 2y 10m average prosecution; 30 applications currently pending
Career History: 217 total applications across all art units
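
How the headline numbers compose, as a quick arithmetic check (a minimal Python sketch; treating the with-interview figure as allow rate plus interview lift is our assumption, not a documented formula of the tool):

    # Hedged sketch: derive the headline stats above from the raw counts.
    granted, resolved = 144, 187

    allow_rate = granted / resolved       # 0.770 -> the 77% career allow rate
    tc_avg = allow_rate - 0.250           # implied Tech Center average: ~52%
    interview_lift = 0.208                # +20.8 points for interviewed cases

    with_interview = allow_rate + interview_lift   # 0.978, reported as 98%
    print(f"base {allow_rate:.1%}, with interview {with_interview:.1%}")

Note that 77.0% + 20.8% = 97.8%, which matches the rounded 98% with-interview figure quoted above and in the projections below.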

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)

Deltas are shown against a Tech Center average estimate. Based on career data from 187 resolved cases.
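
All four deltas back-solve to the same Tech Center baseline of 40.0% (16.7 + 23.3, 42.3 - 2.3, 19.6 + 20.4, and 16.3 + 23.7), so the average estimate appears to be a single 40% reference. A small consistency check (the 40% baseline is inferred by us from the deltas, not stated by the tool):

    # Hedged check: each "vs TC avg" delta is the per-statute rate minus
    # one shared ~40% Tech Center baseline implied by the numbers above.
    rates = {"§101": 0.167, "§103": 0.423, "§102": 0.196, "§112": 0.163}
    TC_BASELINE = 0.40  # inferred Tech Center average estimate

    for statute, rate in rates.items():
        print(f"{statute}: {rate:.1%} ({rate - TC_BASELINE:+.1%} vs TC avg)")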

Office Action

Grounds: §102, §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 11/04/2024, 01/06/2025, 05/27/2025, and 01/05/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 9, 10, 11, and 17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shah (US 20200276708, disclosed in the IDS submitted on 11/04/2024).

For claim 1, Shah teaches ([0111], disclosing a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described):

A method implemented using one or more processors, comprising: simulating a three-dimensional virtual environment that includes an interactive object ([0031], disclosing that the graphics engine can, using one or more graphics processing units, generate three-dimensional graphics depicting the virtual environment, using techniques including three-dimensional structure generation; [0038], disclosing that a generated robot can include one or more end-effector components configured to perform one or more functions with regard to the virtual environment, such as a scoop component configured to scoop dirt and sand, a drill component configured to break up materials within the virtual environment, and a gripper or claw component (collectively, "effector components" or "effectors"));

wherein the three-dimensional virtual environment includes a plurality of robot avatars that are controlled by a corresponding plurality of robot controllers that are external from the three-dimensional virtual environment, wherein a given robot controller of the plurality of robot controllers is operably coupled with the one or more processors ([0051], disclosing that control of a first robot may be provided to a client running on the primary client device 110 for control by the user 105, control of a second robot may be provided to a client running on a user-controlled client device 115 for control by a user 120, and control of a third robot may be provided to a client running on a machine-controlled client device 125 for control by an autonomous robot control program running on the machine-controlled client device; [0055], disclosing that multiple "players" (the clients, users, or autonomous robot control programs) control the robots within a sandbox-type virtual environment during the robot simulation session;
[0036], disclosing that a robot generated by the robot engine 215 comprises a robot being simulated within a robot simulation session for control by clients running on one or more of the primary client devices. Multiple robots are therefore simulated and controlled by their respective client devices, and those client devices are external to the virtual environment. Furthermore, [0024] discloses that the robot simulation server 130 can enable a user to test, within a virtual environment, an autonomous robot control program used to control a real-world robot. The autonomous robot control program is therefore integral with a real-world robot; that real-world robot is external to the virtual environment and necessarily has processors to which it is coupled);

providing, to each robot controller of the plurality of robot controllers, sensor data that is generated from a perspective of the respective robot avatar of the plurality of robot avatars that is controlled by the robot controller ([0018], disclosing a client device 125 that receives data from one or more virtual sensors of a virtual robot representative of a context, state, and other characteristics of the virtual robot within the virtual environment), wherein the sensor data provided to each robot controller is perceived by the robot controller as real-world sensor data ([0029], disclosing that the virtual environment is a realistic (e.g., photo-realistic, spatial-realistic, sensor-realistic, etc.) representation of a real-world location, enabling a user (such as the user 105) to simulate the structure and behavior of a robot in a context that approximates reality; the sensor data is therefore perceived by the robot controller as real-world sensor data);

receiving, from each robot controller of the plurality of robot controllers, joint commands that cause actuation of one or more joints of the respective robot avatar of the plurality of robot avatars that is controlled by the robot controller ([0018], disclosing a client device 125 that receives data from one or more virtual sensors of a virtual robot and provides movement or behavior instructions for the virtual robot based on the received data; [0051], disclosing that the session engine 225 selects a client to control each robot during the robot simulation session, enabling data detected by the virtual sensors of each robot (such as camera data, audio data, lidar data, joint angle data, and the like) to be provided for display or presentation to a user of a corresponding client or an autonomous robot control program, and enabling inputs provided by the users of the clients (or the autonomous robot control program) to be provided to the session engine 225 in order to control the movement and behavior of the robots); and

actuating one or more joints of each robot avatar of the plurality of robot avatars pursuant to corresponding joint commands, wherein the actuating causes two or more of the plurality of robot avatars to act upon the interactive object in the three-dimensional virtual environment, wherein the actuating comprises operating a robot avatar that is controlled by the given robot controller at a frequency that corresponds to a real-world frequency of the respective robot controller ([0018] and [0051], cited above; [0038], disclosing that a generated robot can include one or more end-effector components, such as a scoop, drill, gripper, or claw component, configured to perform functions within the virtual environment. Because a real-world robot is simulated and a robot control program intended for a real-world robot is tested, the operating frequency of the robot control program, the simulated robot, and the robot controller will be the same as intended for the real-world application).

Claims 11 and 17 recite limitations similar in scope to claim 1 and hence are similarly rejected.
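
To make the mapped claim 1 architecture concrete, here is a minimal Python sketch of the claimed loop: controllers that live outside the simulation receive avatar-perspective sensor data and return joint commands (an editorial illustration, not part of the Office Action; every name here is hypothetical):

    # Hypothetical sketch of the claim 1 loop: external controllers exchange
    # sensor data and joint commands with robot avatars in a simulated world.
    from dataclasses import dataclass, field

    @dataclass
    class RobotAvatar:
        name: str
        joints: dict = field(default_factory=dict)  # joint name -> angle

        def render_sensors(self, world: dict) -> dict:
            # Sensor data generated from this avatar's own perspective.
            return {"pose": dict(self.joints), "objects": world["objects"]}

        def actuate(self, commands: dict) -> None:
            # Apply joint commands received from the external controller.
            self.joints.update(commands)

    class ExternalController:
        """Stands in for a robot controller outside the simulation."""
        def __init__(self, hz: float):
            self.hz = hz  # real-world control frequency of this controller

        def step(self, sensor_data: dict) -> dict:
            # The controller treats the data as if it came from real sensors
            # and answers with joint commands (fixed here for brevity).
            return {"shoulder": 0.1, "elbow": -0.2}

    world = {"objects": ["crate"]}
    pairs = [(RobotAvatar("r1"), ExternalController(hz=100.0)),
             (RobotAvatar("r2"), ExternalController(hz=250.0))]

    for avatar, controller in pairs:
        # Each avatar is stepped at its controller's real-world frequency.
        commands = controller.step(avatar.render_sensors(world))
        avatar.actuate(commands)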
For claim 9, Shah teaches: The method of claim 1, further comprising: operating a simulated world clock of the three-dimensional virtual environment at a given frequency ([0029], disclosing that "virtual environment" refers to a computer-rendered representation of reality, and that the virtual environment is a realistic (e.g., photo-realistic, spatial-realistic, sensor-realistic, etc.) representation of a real-world location, enabling a user (such as the user 105) to simulate the structure and behavior of a robot in a context that approximates reality; [0038], disclosing that a generated robot can include one or more end-effector components configured to perform one or more functions with regard to the virtual environment. Because the virtual environment actuates robots per the control commands of the robot control program and provides a realistic representation, it necessarily must operate at a frequency equal to or higher than the highest frequency achievable by any of the plurality of robot controllers; this is essential to simulating real-world sensor data and actuator control); and operating a first robot avatar of the plurality of robot avatars at a first frequency that is less than the given frequency ([0023], disclosing that "robot" can refer to a robot in a traditional sense (e.g., a mobile or stationary robotic entity configured to perform one or more functions) and can also refer to any system or vehicle that can be autonomously and/or remotely controlled, or that executes autonomous control logic; the robot simulation server 130 can instantiate robots, automobiles (such as autonomously or manually controlled cars and trucks), construction equipment (such as autonomously or manually controlled bulldozers, excavators, and other tractors), delivery robots and vehicles, manufacturing robots and articulated arms, warehouse robots, logistics robots, drones and other aerial systems and vehicles, boats, motorcycles, scooters, spaceships, and space robots. Because there are multiple robots, the processing capability and needs of each will differ, and each robot will be operated at its own frequency, which may be equal to or less than the frequency of the simulated world clock).

For claim 10, Shah teaches: The method of claim 1, further comprising pausing a simulated world clock until a robot avatar of the plurality of robot avatars advances to a next stage of operation ([0066], disclosing that the simulation monitor engine 230 can enable the user to stop or pause the simulation, re-assign control of one or more robots to a different entity (e.g., a different user 120 or a different machine-controlled client device 125), assume control of a robot within the virtual environment, record monitored information, make changes to the virtual environment, replay or re-simulate one or more portions of the robot simulation session, and make a change to one or more robots during the robot simulation session. The simulation clock is paused until a change to a robot is made, i.e., until the robot is advanced to the next stage of the test operation).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
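
The clocking limitations of claims 9 and 10 can be pictured as a multi-rate loop: a world clock at least as fast as the fastest controller, avatars stepped on their own slower cadences, and a pause hook. A hedged Python sketch under our own scheduling assumptions (the frequencies and the stage_pending stub are invented for illustration):

    # Hedged sketch of claims 9-10: the world clock runs at least as fast as
    # the fastest controller; each avatar steps only on its own cadence.
    controller_hz = [100.0, 250.0, 60.0]   # assumed real-world frequencies
    world_hz = max(controller_hz)          # claim 9: >= highest controller Hz
    dt = 1.0 / world_hz

    def stage_pending() -> bool:
        # Stub: would return True while the simulation waits for an avatar
        # to advance to its next stage of operation (claim 10's pause).
        return False

    sim_time = 0.0
    next_due = [0.0] * len(controller_hz)

    for _ in range(1000):
        while stage_pending():
            pass                           # world clock held; sim_time frozen
        for i, hz in enumerate(controller_hz):
            if sim_time + 1e-9 >= next_due[i]:
                # Step avatar i: render sensors, fetch commands, actuate.
                next_due[i] += 1.0 / hz    # avatar i runs below world_hz
        sim_time += dt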
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 of U.S. Patent No. 11,813,748. Although the claims at issue are not identical, they are not patentably distinct from each other, as the following claim chart shows (each claim of instant application 18/936,675 is listed with the corresponding claim of U.S. Patent No. 11,813,748, where one is cited):
App claim 1: A method implemented using one or more processors, comprising: simulating a three-dimensional virtual environment that includes an interactive object, wherein the three-dimensional virtual environment includes a plurality of robot avatars that are controlled by a corresponding plurality of robot controllers that are external from the three-dimensional virtual environment, wherein a given robot controller of the plurality of robot controllers is operably coupled with the one or more processors; providing, to each robot controller of the plurality of robot controllers, sensor data that is generated from a perspective of the respective robot avatar of the plurality of robot avatars that is controlled by the robot controller; receiving, from each robot controller of the plurality of robot controllers, joint commands that cause actuation of one or more joints of the respective robot avatar of the plurality of robot avatars that is controlled by the robot controller; and actuating one or more joints of each robot avatar of the plurality of robot avatars pursuant to corresponding joint commands, wherein the actuating causes two or more of the plurality of robot avatars to act upon the interactive object in the three-dimensional virtual environment, wherein the actuating comprises operating a robot avatar that is controlled by the given robot controller at a frequency that corresponds to a real-world frequency of the respective robot controller.

Patent claim 1: A method implemented using one or more processors, comprising: simulating a three-dimensional virtual environment that includes an interactive object, wherein the virtual environment includes a plurality of robot avatars that are controlled independently and contemporaneously by a corresponding plurality of robot controllers that are integral with a plurality of physical robots that are external from the virtual environment and operably coupled with one or more of the processors, wherein the simulating includes generating, for each robot avatar of the plurality of robot avatars, a directed acyclic graph with nodes that represent joints or sensors of the robot avatar; providing, to each robot controller of the plurality of robot controllers, sensor data that is generated from a perspective of the respective robot avatar of the plurality of robot avatars that is controlled by the robot controller, wherein the sensor data provided to each robot controller is perceived by the robot controller as real-world sensor data; receiving, from each robot controller of the plurality of robot controllers, joint commands that cause actuation of one or more joints of the respective robot avatar of the plurality of robot avatars that is controlled by the robot controller; and actuating one or more joints of each robot avatar of the plurality of robot avatars pursuant to corresponding joint commands, wherein the actuating causes two or more of the plurality of robot avatars to act upon the interactive object in the virtual environment.

App claim 2: The method of claim 1, wherein the sensor data provided to the given robot controller is injected into a sensor data channel between one or more real-world sensors of a physical robot and one or more processors of the robot controller that is integral with the physical robot.

App claim 3: The method of claim 1, wherein the joint commands received from the given robot controller are intercepted from a joint command channel between one or more processors of the robot controller and one or more joints of a physical robot.

App claim 4: The method of claim 1, further comprising generating, for each robot avatar of the plurality of robot avatars, a directed acyclic graph with nodes that represent components of the robot avatar.

Patent claim 1, in part: "…wherein the simulating includes generating, for each robot avatar of the plurality of robot avatars, a directed acyclic graph with nodes that represent joints or sensors of the robot avatar."

App claim 5: The method of claim 4, wherein the directed acyclic graph is a dependency graph in which edges between nodes represent dependencies between the components represented by the nodes.

Patent claim 2: The method of claim 1, wherein the directed acyclic graph is a dependency graph in which edges between nodes represent dependencies between the joints or sensors represented by the nodes.

App claim 6: The method of claim 5, wherein at least one node representing a simulated sensor of the robot avatar imposes a delay on output of the sensor being passed up the directed acyclic graph, wherein the delay corresponds to a frequency of a real-world sensor corresponding to the simulated sensor.

Patent claim 3: The method of claim 2, wherein at least one node representing a simulated sensor of the robot avatar imposes a delay on output of the sensor being passed up the directed acyclic graph, wherein the delay corresponds temporally to an operational frequency of a real-world sensor corresponding to the simulated sensor.

App claim 7: The method of claim 5, wherein one or more nodes of the directed acyclic graph represent a simulated environmental condition of the three-dimensional virtual environment.

Patent claim 4: The method of claim 2, wherein one or more nodes of the directed acyclic graph represent a simulated environmental condition of the virtual environment.

App claim 8: The method of claim 5, wherein one or more nodes of the directed acyclic graph represent a simulated condition of a simulated sensor of the robot avatar.

Patent claim 5: The method of claim 2, wherein one or more nodes of the directed acyclic graph represent a simulated condition of a simulated sensor of the robot avatar.

App claim 9: The method of claim 1, further comprising: operating a simulated world clock of the three-dimensional virtual environment at a given frequency; and operating a first robot avatar of the plurality of robot avatars at a first frequency that is less than the given frequency.

Patent claim 7: The method of claim 6, further comprising: operating a simulated world clock of the virtual environment at a given frequency that is greater than or equal to a highest frequency achievable by any of the plurality of robot controllers; and operating a first robot avatar of the plurality of robot avatars at a first frequency that is less than the given frequency.

App claim 10: The method of claim 1, further comprising pausing a simulated world clock until a robot avatar of the plurality of robot avatars advances to a next stage of operation.

Patent claim 8: The method of claim 9, further comprises pausing a simulated world clock until a robot avatar of the plurality of robot avatars advances to a next stage of operation.

Claims 11-17 recite limitations similar in scope to claims 1-10 and hence are similarly rejected.

Allowable Subject Matter

Claims 2, 3, 4, 5, 6, 7, 8, 12, 13, 14, 15, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Furthermore, rewriting claims 4, 5, 6, 7, 8, 14, 15, and 16 in independent form requires overcoming the double patenting rejection for those claims to be in condition for allowance.
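
For orientation, the directed acyclic graph limitations charted above (application claims 4-8; patent claims 1-5) could be realized along these lines (a hypothetical Python sketch; the node semantics and the delay rule are our reading of the claim language, not the patentee's implementation):

    # Hypothetical sketch of claims 4-8: avatar components are DAG nodes,
    # edges are dependencies, and a sensor node delays fresh output to match
    # the operational frequency of its real-world counterpart.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Node:
        name: str
        deps: list = field(default_factory=list)  # edges = dependencies
        hz: Optional[float] = None                # real-world sensor rate
        last_emit: float = float("-inf")
        cached: object = None

        def evaluate(self, sim_time: float):
            inputs = [d.evaluate(sim_time) for d in self.deps]
            if self.hz is not None and sim_time - self.last_emit < 1.0 / self.hz:
                return self.cached        # delay: sensor reading not due yet
            self.last_emit = sim_time
            self.cached = (self.name, inputs)
            return self.cached

    joint = Node("elbow_joint")
    camera = Node("wrist_camera", deps=[joint], hz=30.0)  # 30 Hz real camera

    print(camera.evaluate(0.00))   # fresh reading passed up the graph
    print(camera.evaluate(0.01))   # cached: the 30 Hz sensor is not due yet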
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARSLAN AZHAR, whose telephone number is (571) 270-1703. The examiner can normally be reached Mon-Fri, 7:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ARSLAN AZHAR/
Examiner, Art Unit 3656

Prosecution Timeline

Nov 04, 2024
Application Filed
Jan 30, 2026
Non-Final Rejection — §102, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589495
METHOD FOR DETERMINING POSE OF ROBOT, ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12589500
METHOD AND DEVICE FOR ANNOTATING IMAGES OF AN OBJECT CAPTURED USING A CAMERA
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12589497
Monitoring System and Method for Operating the System
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12583111
Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12576514
ROBOT CONTROL METHOD, ROBOT, AND CONTROL TERMINAL
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 98% (+20.8%)
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 187 resolved cases by this examiner. Grant probability is derived from the career allow rate.
