Prosecution Insights
Last updated: April 19, 2026
Application No. 17/846,243

SYSTEMS, COMPUTER PROGRAM PRODUCTS, AND METHODS FOR BUILDING SIMULATED WORLDS

Non-Final OA: §101, §102, §103, §112, §DP
Filed
Jun 22, 2022
Examiner
WHITE, JAY MICHAEL
Art Unit
2188
Tech Center
2100 — Computer Architecture & Software
Assignee
Sanctuary Cognitive Systems Corporation
OA Round
1 (Non-Final)
Grant Probability: 12% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 12% (1 granted / 8 resolved; -42.5% vs TC avg). Grants only 12% of cases.
Interview Lift: +100.0%, a strong lift comparing resolved cases with vs. without an interview.
Typical Timeline: 3y 3m avg prosecution; 34 applications currently pending.
Career History: 42 total applications across all art units.
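The panel figures above are simple ratios over resolved cases. A minimal Python sketch of the arithmetic follows; note that the with/without-interview allowance rates used below (25.0% vs. 12.5%) are hypothetical values chosen only to illustrate how a +100.0% lift would arise, since the panel does not report the underlying split.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: granted cases as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative change in allowance rate for cases with vs. without an interview."""
    return 100.0 * (rate_with - rate_without) / rate_without

# 1 granted out of 8 resolved cases, as shown in the panel.
rate = allow_rate(1, 8)            # 12.5, displayed as "12%"

# Hypothetical split: 25.0% allowance with an interview vs. 12.5% without.
lift = interview_lift(25.0, 12.5)  # +100.0

print(f"Career allow rate: {rate:.1f}%  Interview lift: {lift:+.1f}%")
```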

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 30.3% (-9.7% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 8 resolved cases
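All four deltas above are consistent with a single Tech Center average estimate of 40.0% (for example, 32.6% - 40.0% = -7.4%). The short sketch below back-calculates the displayed deltas under that inferred 40.0% figure, which is an assumption read off the chart rather than a number the panel states explicitly.

```python
# Per-statute allowance-after-rejection rates from the panel.
rates = {"§101": 32.6, "§103": 30.3, "§102": 9.9, "§112": 24.2}

# Inferred Tech Center average: every displayed delta equals rate - 40.0.
TC_AVG = 40.0

for statute, rate in rates.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```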

Office Action

§101 §102 §103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 1-10, 12-16, 18, and 20 are rejected for nonstatutory double patenting. Claims 1-20 are rejected under 35 USC 112. Claims 1-20 are rejected under 35 USC 101. Claims 1, 7-9, 10, 12, 15-16, 18, and 20 are rejected under 35 USC 102 over Reddy. Claims 2-6 and 13-14 are rejected under 35 USC 103 over Reddy and Rosenfeld. Claims 11, 17, and 19 are rejected under 35 USC 103 over Reddy and Caren.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-10, 12-16, 18, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5-10, 12, 15-17, and 20 (see mapping below) of copending Application No. 17/846,262 (‘262) in view of NPL: “” by Rosenfeld et al. (Rosenfeld). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the robot and tele-operation systems of ‘262 by the GUI of Rosenfeld because a person of ordinary skill in the art would be motivated, based on the exchange of data and update of the simulation environment in the robot-human collaborative environment of the independent claims of ‘262, to look to Rosenfeld for human-robot team collaboration that significantly improves performance.

(‘262 Claim 1: “1. A method of updating, by a robot system including a robot body, a simulation of an external environment of the robot body, the method comprising: loading a simulation of an external environment of the robot body; providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body; receiving simulation instructions from the tele-operation system; and updating the simulation of the external environment based on the simulation instructions.”; Rosenfeld Abstract: “In this article we propose a novel approach for utilizing automated advising agents in assisting an operator to better manage a team of multiple robots in complex environments. We introduce an advice provision methodology and exemplify its implementation using automated advising agents in two real-world human–multi-robot team collaboration tasks: the Search And Rescue (SAR) and the warehouse operation tasks.
Our intelligent advising agents were evaluated through extensive field trials, with over 150 human operators using both simulated and physical mobile robots, and showed a significant improvement in the team’s performance.”) This is a provisional nonstatutory double patenting rejection.

Claim mapping (17/846,243 vs. 17/846,262):

Claim 1 (‘243): 1. A method of updating a simulation of an external environment of an agent by a tele-operation system, the method comprising:
(‘262) 1. A method of updating, by a robot system including a robot body, a simulation of an external environment of the robot body, the method comprising:

(‘243) displaying a simulation of the external environment of the agent to at least one user of the tele-operation system, the at least one user being physically remote from the agent;
Rosenfeld Page 220, 4.1.4. Results: "We further analyze the subject’s acceptance rate of the agent’s advice. Advice is considered accepted if it is performed by the subject while it is displayed on the GUI." See Rosenfeld Figs. 1-2 on Page 217.

(‘243) receiving data collected by at least one sensor of the agent;
(‘262) 1. […] providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body; […]

(‘243) providing the data collected by at least one sensor of the agent to the at least one user of the tele-operation system;
Rosenfeld Page 217, Second Paragraph: "A is the action set that the operator can take. Recall that A is also the advice set from which the agent can propose advice. We define A as the instantiations of the following 7 action schemes: “Robot i is waiting for your command”, “Unload item x at station y.”, “Complete the packing of order z.”, “Clear an obstacle from the floor.”, “Obstacle was detected – restrict its cell.”, “Clear a critical obstacle from the floor.”, and “Critical obstacle was detected – restrict its cell.”" - The user is informed if an obstacle is detected/sensed.)
(‘243) receiving simulation instructions from the at least one user of the tele-operation system, the simulation instructions based at least in part on the data collected by at least one sensor of the agent;
(‘262) 1. receiving simulation instructions from the tele-operation system; and

(‘243) updating the simulation of the external environment based on the simulation instructions; and
(‘262) 1. updating the simulation of the external environment based on the simulation instructions.

(‘243) displaying the updated simulation of the external environment to the at least one user of the tele-operation system.
Rosenfeld Page 220, 4.1.4. Results: "We further analyze the subject’s acceptance rate of the agent’s advice. Advice is considered accepted if it is performed by the subject while it is displayed on the GUI." See Rosenfeld Figs. 1-2 on Page 217.

Claim 2 (‘243): 2. The method of claim 1 wherein receiving simulation instructions from the at least one user of the tele-operation system includes receiving instructions that describe a modification to the simulation of the external environment.
(‘262) 6. The method of claim 1 wherein receiving simulation instructions from the tele-operation system includes receiving instructions that describe a modification to the simulation of the external environment.

Claim 3 (‘243): 3. The method of claim 2 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.
(‘262) 7. The method of claim 6 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.

Claim 4 (‘243): 4.
The method of claim 2 wherein receiving instructions that describe a modification to the simulation of the external environment includes receiving instructions that describe a modification to at least one object representation in the simulation of the external environment.
(‘262) 8. The method of claim 6 wherein receiving instructions that describe a modification to the simulation of the external environment includes receiving instructions that describe a modification to at least one object representation in the simulation of the external environment.

Claim 5 (‘243): 5. The method of claim 4 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.
(‘262) 9. The method of claim 8 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.

Claim 6 (‘243): 6.
The method of claim 1 wherein receiving simulation instructions from the at least one user of the tele-operation system includes receiving instructions that describe a new object representation for the simulation of the external environment, and wherein updating the simulation of the external environment based on the simulation instructions includes applying the simulation instructions to add the new object representation to the simulation of the external environment, the new object representation corresponding to a real-world counterpart in the external environment characterized, at least in part, by the data collected by at least one sensor of the agent.
(‘262) 10. The method of claim 1 wherein receiving simulation instructions from the tele-operation system includes receiving instructions that describe a new object representation for the simulation of the external environment, and wherein updating the simulation of the external environment based on the simulation instructions includes applying the simulation instructions to add the new object representation to the simulation of the external environment, the new object representation corresponding to a real-world counterpart in the external environment characterized, at least in part, by the data collected by at least one sensor on-board the robot body.

Claim 7 (‘243): 7. The method of claim 1, further comprising: providing additional data collected by at least one sensor of the agent to the at least one user of the tele-operation system; receiving additional simulation instructions from the at least one user of the tele-operation system; re-updating the simulation of the external environment based on the additional simulation instructions; and displaying the re-updated simulation of the external environment to the at least one user of the tele-operation system.
This is just a repetition of the steps of claim 1 with new data, which is rejected for the same reasons as claim 1.

Claim 8 (‘243): 8.
The method of claim 1 wherein the agent includes a robot system including a robot body and the at least one sensor of the agent includes at least one image sensor on-board the robot body, and wherein: displaying a simulation of the external environment of the agent to at least one user of the tele-operation system includes displaying a simulation of the external environment of the robot body to at least one user of the tele-operation system; receiving data collected by at least one sensor of the agent includes receiving data collected by at least one image sensor of the robot body; and providing the data collected by at least one sensor of the agent to the at least one user of the tele-operation system includes providing the data collected by at least one image sensor of the robot body to the at least one user of the tele-operation system.
(‘262) 1. […] providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body;
Rosenfeld: See Fig. 9 on Page 223; Page 229, Fifth Paragraph: "The GUI (see Fig. 2) provides the operator with on-line feedback from the cameras mounted on the robots (Thumbnail area), the 2D map of the terrain including the robots’ reported positions and their footprints (area 1), an enlarged camera view of the robot of interest (area 2), an action control bar, and a joystick widget. The action bar’s commands and joystick functions are also available using keyboard and mouse shortcuts inspired by strategic computer games. For example, in order to set interest on a specific robot, the operator could click its thumbnail camera or location on the map or could click on its number on the keyboard. Double clicking will center the map on the robot’s location."

Claim 9 (‘243): 9. The method of claim 8, further comprising: training the robot system to autonomously update the simulation of the external environment based on multiple iterations of: [The method of claim 1]
(‘262) 5.
The method of claim 1, further comprising: training the robot system to autonomously update the simulation of the external environment based on multiple iterations of: [The rejection of the steps is the same as in claim 1, including Rosenfeld]

Claim 10 (‘243): 10. The method of claim 8 wherein the at least one user of the tele-operation system includes at least one tele-operator of the robot system.
(‘262) 1. […] providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body;
Rosenfeld: See Fig. 9 on Page 223; Page 229, Fifth Paragraph: "The GUI (see Fig. 2) provides the operator with on-line feedback from the cameras mounted on the robots (Thumbnail area), the 2D map of the terrain including the robots’ reported positions and their footprints (area 1), an enlarged camera view of the robot of interest (area 2), an action control bar, and a joystick widget. The action bar’s commands and joystick functions are also available using keyboard and mouse shortcuts inspired by strategic computer games. For example, in order to set interest on a specific robot, the operator could click its thumbnail camera or location on the map or could click on its number on the keyboard. Double clicking will center the map on the robot’s location."

Claim 11: Not Rejected

Claim 12 (‘243): 12. A tele-operation system comprising: at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to: [Perform operations of claim 1]
(‘262) 12.
A robot system comprising: […] at least one processor; and […] at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to: [Perform operations rejected under claim 1 using Rosenfeld]

Claim 13 (‘243): 13. The tele-operation system of claim 12 wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to receive simulation instructions from the at least one user of the tele-operation system, cause the tele-operation system to receive instructions that describe a modification to the simulation of the external environment, and wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to update the simulation of the external environment based on the simulation instructions, cause the tele-operation system to apply the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.
(‘262) 16. The robot system of claim 12 wherein the simulation instructions received from the tele-operation system describe a modification to the simulation of the external environment, and wherein the data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to update the simulation of the external environment based on the simulation instructions, cause the robot system to apply the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.

Claim 14 (‘243): 14.
The tele-operation system of claim 12 wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to receive simulation instructions from the at least one user of the tele-operation system, cause the tele-operation system to receive instructions that describe a modification to at least one object representation in the simulation of the external environment, and wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to update the simulation of the external environment based on the simulation instructions, cause the tele-operation system to apply the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.
(‘262) 17. The robot system of claim 12 wherein the simulation instructions received from the tele-operation system describe a modification to at least one object representation in the simulation of the external environment, and wherein the data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to update the simulation of the external environment based on the simulation instructions, cause the robot system to apply the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.

Claim 15 (‘243): 15. The tele-operation system of claim 12 wherein the agent includes a robot system including a robot body, and the at least one sensor of the agent includes at least one image sensor on-board the robot body.
(‘262) 1.
[…] providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body;
Rosenfeld: See Fig. 9 on Page 223; Page 229, Fifth Paragraph: "The GUI (see Fig. 2) provides the operator with on-line feedback from the cameras mounted on the robots (Thumbnail area), the 2D map of the terrain including the robots’ reported positions and their footprints (area 1), an enlarged camera view of the robot of interest (area 2), an action control bar, and a joystick widget. The action bar’s commands and joystick functions are also available using keyboard and mouse shortcuts inspired by strategic computer games. For example, in order to set interest on a specific robot, the operator could click its thumbnail camera or location on the map or could click on its number on the keyboard. Double clicking will center the map on the robot’s location."

Claim 16 (‘243): 16. The tele-operation system of claim 15, further comprising: data and/or processor-executable instructions stored in the non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the tele-operation system to: train the robot system to autonomously update to the simulation of the external environment based on multiple iterations of: receiving simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one image sensor of the robot body; and updating the simulation of the external environment based on the simulation instructions.
(‘262) 15.
The robot system of claim 12, further comprising: data and/or processor-executable instructions stored in the at least one non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the robot system to: train the robot system to autonomously update the simulation of the external environment based on multiple iterations of: receiving simulation instructions from the tele-operation system; and updating the simulation of the external environment based on the simulation instructions.

Claim 17: Not Rejected

Claim 18 (‘243): 18. A computer program product comprising data and/or processor-executable instructions stored in a non-transitory processor-readable storage medium, the data and/or processor-executable instructions which, when the non-transitory processor-readable storage medium is communicatively coupled to at least one processor of a tele-operation system and the at least one processor executes the data and/or processor-executable instructions, cause the tele-operation system to: [Perform operations of claim 1]
(‘262) 20. A computer program product comprising data and/or processor-executable instructions stored in a non-transitory processor-readable storage medium, the data and/or processor-executable instructions which, when the non-transitory processor-readable storage medium is communicatively coupled to at least one processor of a robot system and the at least one processor executes the data and/or processor-executable instructions, cause the robot system to: [See rejection of claim 1 operations with Rosenfeld]

Claim 19: Not Rejected

Claim 20 (‘243): 20.
The computer program product of claim 18, wherein the agent includes a robot system including a robot body and the at least one sensor of the agent includes at least one image sensor on-board the robot body, and further comprising: data and/or processor-executable instructions which, when the non-transitory processor-readable storage medium is communicatively coupled to at least one processor of a tele-operation system and the at least one processor executes the data and/or processor-executable instructions, cause the tele-operation system to: train the robot system to autonomously update the simulation of the external environment based on multiple iterations of:
(‘262) 20. A computer program product comprising data and/or processor-executable instructions stored in a non-transitory processor-readable storage medium, the data and/or processor-executable instructions which, when the non-transitory processor-readable storage medium is communicatively coupled to at least one processor of a robot system and the at least one processor executes the data and/or processor-executable instructions, cause the robot system to: [Execute operations of claim 1]
(‘262) 15. The robot system of claim 12, further comprising: data and/or processor-executable instructions stored in the at least one non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the robot system to: train the robot system to autonomously update the simulation of the external environment based on multiple iterations of: providing data collected by at least one sensor of the robot body to a tele-operation system that is physically remote from the robot body; receiving simulation instructions from the tele-operation system; and updating the simulation of the external environment based on the simulation instructions.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Physically Remote

The term “physically remote” in independent claims 1, 12, and 18 is a relative term which renders the claim indefinite. The term “physically remote” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The dependent claims that depend from the rejected claims are rejected at least based on their dependency.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Software Per Se

Claims 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claims are directed to software per se.
While the claims recite a non-transitory medium, the claims are directed to the data stored thereon and do not positively recite the medium as an element of the claim. Accordingly, the claim, as drafted, is directed only to software per se.

Subject Matter Eligibility

Claims 1-20 are rejected under 35 U.S.C. 101 as ineligible subject matter.

Independent Claims

Claim 12 (Statutory Category – Machine)

Step 2A – Prong 1: Judicial Exception Recited? Yes, the claims recite a mental process. Claim 12 recites:

update the simulation of the external environment based on the simulation instructions; and

(Evaluation/Mental Process – Updating a simulation (e.g., modifying an image) can practically be performed in the mind or with the aid of pen, paper, and/or a calculator.) Claim 12 recites a mental process, which is an abstract idea.

Step 2A – Prong 2: Integrated into a Practical Application? No. The additional limitations:

at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to: […] tele-operation system […] […] sensor […] […] agent […]

These are generic computing elements recited at a high level of generality and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application.
[…] receive data collected by at least one sensor of the agent; provide the data collected by at least one sensor of the agent to the at least one user of the tele-operation system; receive simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one sensor of the agent; […]

These steps are mere data gathering, similar to the MPEP 2106.05(g) examples: “e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent”; “iv. Obtaining information about transactions using the Internet to verify credit card transactions”; “iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display”; “v. Consulting and updating an activity log, Ultramercial.” Mere data gathering is insignificant extra-solution activity and, under MPEP 2106.05(g), fails to integrate the abstract idea into a practical application.

[…] display a simulation of an external environment of an agent to at least one user of the tele-operation system, the at least one user being physically remote from the agent; […] display the updated simulation of the external environment to the at least one user of the tele-operation system.

Display of data is insignificant extra-solution activity similar to the MPEP 2106.05(g) examples: “iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display”; “ii. Printing or downloading generated menus.” These steps are insignificant extra-solution activity and, under MPEP 2106.05(g), fail to integrate the abstract idea into a practical application.
Any recited data merely limits the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fails to integrate the abstract idea into a practical application. None of the additional limitations of claim 12, whether in isolation or combination, integrate the abstract idea into a practical application. Accordingly, claim 12 is directed to the abstract idea.

Step 2B: Does the Claim Provide an Inventive Concept? No. The additional limitations:

at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to: […] tele-operation system […] […] sensor […] […] agent […]

These are generic computing elements recited at a high level of generality and, under MPEP 2106.05(f), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept.

[…] receive data collected by at least one sensor of the agent; provide the data collected by at least one sensor of the agent to the at least one user of the tele-operation system; receive simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one sensor of the agent; […]

These steps are well-understood, routine, conventional (WURC) activity similar to the MPEP 2106.05(d) examples: “i. Receiving or transmitting data over a network”; “iii. Electronic recordkeeping”; “iv. Storing and retrieving information in memory”; “i. Determining the level of a biomarker in blood by any means”; “vi.
Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price.” Because these steps are WURC and, as previously demonstrated, insignificant extra-solution activity, under MPEP 2106.05(d) and 2106.05(g), the steps fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept.

[…] display a simulation of an external environment of an agent to at least one user of the tele-operation system, the at least one user being physically remote from the agent; […] display the updated simulation of the external environment to the at least one user of the tele-operation system.

Display of data is WURC, similar to the MPEP 2106.05(d) examples: “iv. Storing and retrieving information in memory” “iv. Presenting offers and gathering statistics.” Because these steps are WURC and, as previously demonstrated, insignificant extra-solution activity, under MPEP 2106.05(d) and 2106.05(g), the steps fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept.

Any recited data merely limits the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fails to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept. None of the additional limitations of claim 12, whether in isolation or combination, combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept. Claim 12 is ineligible.

Claim 1 (Statutory Category – Process)

Regarding claim 1, claim 1 recites the method executed by the configuration of the system of claim 12 and is rejected for the same reasons as claim 12. Claim 1 is ineligible.
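For orientation, the limitations analyzed above describe a display–sense–instruct–update loop between a remote user and an agent. The following is a minimal illustrative sketch of that loop, not the application's implementation; every name and data shape here is hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Simulation:
    # Hypothetical stand-in for the claimed "simulation of an external
    # environment of an agent": a mapping of object representations.
    objects: dict = field(default_factory=dict)


def teleoperation_round(sim, sensor_data, get_instructions, display):
    """One iteration of the loop recited in claim 12 (illustrative only).

    display          -- callable that shows the simulation to the remote user
    get_instructions -- callable returning the user's simulation
                        instructions, given the agent's sensor data
    """
    display(sim)                                   # display simulation to the user
    instructions = get_instructions(sensor_data)   # provide data; receive instructions
    sim.objects.update(instructions)               # update simulation per instructions
    display(sim)                                   # display the updated simulation
    return sim
```

Under this sketch, the examiner's characterization maps onto the code directly: the `display` calls are the alleged extra-solution display steps, and `get_instructions` bundles the alleged data-gathering steps.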
Claim 18 (Statutory Category – None, Software Per Se)

Regarding claim 18, claims 18-20 are software per se and do not belong to one of the four statutory categories. However, in the interest of compact prosecution, these claims will be addressed for eligibility as if the Applicant had amended the claims to positively recite the non-transitory CRM as an element. Claim 18 is the software and is likely intended to be the CRM of claim 12, so claim 18 is rejected for the same reasons as claim 12. Claim 18 is ineligible.

Dependent Claims

The dependent claims are also ineligible for the following reasons. Note that elements recognized as generic computing elements in the independent claims fail to confer eligibility under MPEP 2106.05(f) for the same reasons in the dependent claims. Similarly, data descriptions specific to the technological environment merely restrict the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fail to confer eligibility.

Claim 2

wherein receiving simulation instructions from the at least one user of the tele-operation system includes receiving instructions that describe a modification to the simulation of the external environment.

This is mere data gathering and WURC and fails to confer eligibility for the same reasons as the receiving steps of the independent claims. Claim 2 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 2 is ineligible.

Claim 3

wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.

This is an evaluation that is an element of the updating, and is an element of the abstract idea for the same reasons as the updating step in the independent claims.
Claim 3 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 3 is ineligible.

Claim 4

wherein receiving instructions that describe a modification to the simulation of the external environment includes receiving instructions that describe a modification to at least one object representation in the simulation of the external environment.

This describes the nature of the data gathered and merely limits the abstract idea to a particular technological field, which, under MPEP 2106.05(h), fails to confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 4 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 4 is ineligible.

Claim 5

wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.

This is an evaluation that is an element of the updating, and is an element of the abstract idea for the same reasons as the updating step in the independent claims. Claim 5 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 5 is ineligible.

Claim 6

wherein receiving simulation instructions from the at least one user of the tele-operation system includes receiving instructions that describe a new object representation for the simulation of the external environment, and

This is an element of the mere data gathering and fails to confer eligibility for the same reasons as the receiving steps of the independent claims. Also, the nature of the data merely limits the abstract idea to a particular technological field and, under MPEP 2106.05(h), fails to confer eligibility at Step 2A, Prong 2 and Step 2B.
wherein updating the simulation of the external environment based on the simulation instructions includes applying the simulation instructions to add the new object representation to the simulation of the external environment, the new object representation corresponding to a real-world counterpart in the external environment characterized, at least in part, by the data collected by at least one sensor of the agent.

This is an evaluation that is an element of the updating, and is an element of the abstract idea for the same reasons as the updating step in the independent claims. Claim 6 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 6 is ineligible.

Claim 7

providing additional data collected by at least one sensor of the agent to the at least one user of the tele-operation system; receiving additional simulation instructions from the at least one user of the tele-operation system; re-updating the simulation of the external environment based on the additional simulation instructions; and displaying the re-updated simulation of the external environment to the at least one user of the tele-operation system.

This is a repetition of steps of the independent claims and is rejected for the same reasons as the steps in the prior iteration expressed in the independent claims. Claim 7 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 7 is ineligible.

Claim 8

wherein the agent includes a robot system including a robot body and the at least one sensor of the agent includes at least one image sensor on-board the robot body, and wherein:

A robot agent with a robot body is a computing element recited at a high level of generality, so under MPEP 2106.05(f), it fails to confer eligibility at Step 2A, Prong 2 and Step 2B.
displaying a simulation of the external environment of the agent to at least one user of the tele-operation system includes displaying a simulation of the external environment of the robot body to at least one user of the tele-operation system;

This fails to confer eligibility for the same reasons as the display steps of the independent claims.

receiving data collected by at least one sensor of the agent includes receiving data collected by at least one image sensor of the robot body; and

This merely qualifies the data gathered and is insignificant extra-solution activity and WURC for the same reasons as the receiving steps of the independent claims. Accordingly, this fails to confer eligibility at Step 2A, Prong 2 and Step 2B.

providing the data collected by at least one sensor of the agent to the at least one user of the tele-operation system includes providing the data collected by at least one image sensor of the robot body to the at least one user of the tele-operation system.

This fails to confer eligibility for the same reasons as the providing step in the independent claims. Claim 8 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 8 is ineligible.

Claim 9

further comprising: training the robot system to autonomously update the simulation of the external environment based on multiple iterations of: the receiving data collected by at least one image sensor of the robot body; the providing the data collected by at least one image sensor of the robot body to the at least one user of the tele-operation system; the receiving simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one image sensor of the robot body; and the updating the simulation of the external environment based on the simulation instructions.
This merely recites, at a high level of generality, that generic AI training is performed based on the steps of the independent claims. Accordingly, this fails to confer eligibility as an "apply it" step on a generic computing element under MPEP 2106.05(f). Also, the steps are mere repeats of the steps in the independent claims and are rejected for the same reasons as those steps. Claim 9 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 9 is ineligible.

Claim 10

wherein the at least one user of the tele-operation system includes at least one tele-operator of the robot system.

This merely limits the abstract idea to a particular field of technology and so, under MPEP 2106.05(h), fails to confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 10 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 10 is ineligible.

Claims 11, 17, and 19

wherein the at least one user of the tele-operation system includes a plurality of tele-artists and the tele-operation system enables the plurality of tele-artists to concurrently update the simulation.

This merely limits the abstract idea to a particular field of technology and so, under MPEP 2106.05(h), fails to confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 11 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claims 17 and 19 recite substantially the same features as claim 11. Claims 11, 17, and 19 are ineligible.

Claim 13

Claim 13 recites substantially the same features as claims 2 and 3 combined and is rejected for the same reasons as claims 2 and 3. Claim 13 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 13 is ineligible.

Claim 14

Claim 14 recites substantially the same features as claims 4 and 5 combined and is rejected for the same reasons as claims 4 and 5.
Claim 14 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 14 is ineligible.

Claim 15

wherein the agent includes a robot system including a robot body, and the at least one sensor of the agent includes at least one image sensor on-board the robot body.

This merely recites, at a high level of generality, the use of a robot with sensors, including an image sensor (e.g., a camera). Accordingly, this fails to confer eligibility as an "apply it" step on a generic computing element under MPEP 2106.05(f). This also merely limits the abstract idea to a particular field of technology and so, under MPEP 2106.05(h), fails to confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 15 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 15 is ineligible.

Claims 16 and 20

train the robot system to autonomously update the simulation of the external environment based on multiple iterations of: receiving simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one image sensor of the robot body; and updating the simulation of the external environment based on the simulation instructions.

This represents a generic computing operation of training a robot system to autonomously update an environment, recited at a high level, especially because there is no recited link between the repeated iterations of the steps and how they affect the training. Also, the repeated steps fail to confer eligibility for the same reasons as the corresponding steps in the independent claims. Claim 16 fails to recite any additional limitations that confer eligibility at Step 2A, Prong 2 and Step 2B. Claim 20 recites features similar to the features of claim 16 and fails to confer eligibility for at least the same reasons.
Claims 16 and 20 are ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 7-9, 10, 12, 15-16, 18, and 20: Reddy

Claim(s) 1, 7-9, 10, 12, 15-16, 18, and 20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by the NPL reference “Shared Autonomy via Deep Reinforcement Learning” by Reddy et al. (Reddy).

Claims 1, 12, and 18

Regarding claim 12, Reddy teaches:

12. A tele-operation system comprising: at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to:

(Reddy Page 2, II. Related Work: “Robotic teleoperation. We build on shared autonomy work in which the system is initially unaware of the user’s goal [9, 7, 21, 13, 24, 16, 10] and explore problem statements with unknown dynamics, unknown user policy, and unknown goal representation.” See FIG. 1 (below) with a computer providing instructions to a robot.)
[Reddy FIG. 1, reproduced in the Office Action as a greyscale image, omitted here.]

display a simulation of an external environment of an agent to at least one user of the tele-operation system, the at least one user being physically remote from the agent;

(Reddy FIG. 1(a) illustrates the visualization produced; figure image omitted.)

receive data collected by at least one sensor of the agent;

(Reddy Page 3, Right Column, C. Incorporating User Control: “Because we do not know dynamics in any of our problems of interest, we use a deep reinforcement learning agent which maps observations from its sensors to actions (or Q values for each action).” – Data is collected from robot sensors.)

provide the data collected by at least one sensor of the agent to the at least one user of the tele-operation system;

(Reddy Page 3, Left Column, Last Paragraph: “We merely assume the robot is informed when the user provides feedback (e.g., by pressing a button). In practice, the user might simply indicate once per trial whether the robot succeeded or not.” Page 7, Right Column, VII. User Study With A Physical Robot: “To evaluate our method in a more realistic environment, we formulate a “perching” task for a real human flying a real quadrotor: land the vehicle on a level, square landing pad at some distance from the initial take-off position, such that the drone’s first-person camera is pointed at a specific object in the drone’s surroundings, without flying out of bounds or running out of time. Perching a drone at an arbitrary vantage point enables it to be used as a mobile security camera for surveillance applications. Humans find it challenging to simultaneously point the camera at the desired scene and navigate to the precise location of a feasible landing pad under time constraints.
An assistive copilot has little trouble navigating to and landing on the landing pad, but does not know where to point the camera because it does not know what the human wants to observe after landing. Together, the human can focus on pointing the camera and the copilot can focus on landing precisely on the landing pad.” – The camera provides visual data of the real environment to the user so that the user and robot can collaborate.)

receive simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one sensor of the agent;

(Page 7, Right Column, VII User Study W
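The Reddy passages quoted above describe a deep RL agent that maps observations to Q-values and shares control with a human pilot. A common arbitration rule in this line of shared-autonomy work, sketched here under stated assumptions (the tolerance `alpha`, the scalar action space, and the distance metric are illustrative choices, not taken from the paper or the application), executes the near-optimal action closest to the user's input:

```python
def shared_autonomy_action(q_values, user_action, alpha=0.5):
    # q_values: mapping of action -> estimated Q-value from the RL agent.
    # Keep actions whose value is within a tolerance of the best one,
    # then defer to the human by picking the feasible action nearest
    # the user's input. alpha in [0, 1]: 0 = pure autonomy, 1 = pure user.
    q_max, q_min = max(q_values.values()), min(q_values.values())
    threshold = (1 - alpha) * q_max + alpha * q_min
    feasible = [a for a, q in q_values.items() if q >= threshold]
    return min(feasible, key=lambda a: abs(a - user_action))
```

In the perching example, this is the sense in which the copilot "has little trouble landing" while still deferring to the human on where to point the camera: the agent constrains the action set to safe landings and lets the user's input break ties within it.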

Prosecution Timeline

Jun 22, 2022
Application Filed
Oct 14, 2025
Non-Final Rejection — §101, §102, §103 (current)
