Prosecution Insights
Last updated: April 19, 2026
Application No. 17/846,262

SYSTEMS, COMPUTER PROGRAM PRODUCTS, AND METHODS FOR BUILDING SIMULATED WORLDS

Non-Final OA: §101, §102, §103, §112, §DP
Filed: Jun 22, 2022
Examiner: WHITE, JAY MICHAEL
Art Unit: 2188
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sanctuary Cognitive Systems Corporation
OA Round: 1 (Non-Final)
Grant Probability: 12% (At Risk)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Grants only 12% of cases.

Career Allow Rate: 12% (1 granted / 8 resolved; -42.5% vs TC avg)
Interview Lift: +100.0% (strong lift among resolved cases with an interview)
Avg Prosecution: 3y 3m (typical timeline; 34 currently pending)
Total Applications: 42 (career history, across all art units)
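
These card figures reduce to simple ratios over the examiner's resolved cases. A minimal sketch of the presumed computation follows; the with/without-interview split is hypothetical, since only the totals above appear on this page:

    # Presumed derivation of the dashboard figures (an assumption: the
    # tool's exact methodology is not shown on this page).
    granted_total, resolved_total = 1, 8            # "1 granted / 8 resolved"
    allow_rate = granted_total / resolved_total     # 0.125, displayed as 12%

    # Hypothetical split for illustration: suppose the single allowance
    # came from 4 resolved cases that included an examiner interview.
    granted_iv, resolved_iv = 1, 4
    rate_with_interview = granted_iv / resolved_iv  # 0.25

    # Doubling over the career rate corresponds to the "+100.0%" lift.
    interview_lift = rate_with_interview / allow_rate - 1
    print(f"allow rate {allow_rate:.0%}, interview lift {interview_lift:+.1%}")
    # -> allow rate 12%, interview lift +100.0%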

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 30.3% (-9.7% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§112: 24.2% (-15.8% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 8 resolved cases
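A consistency check on the deltas: each statute's rate plus the magnitude of its shortfall lands on the same implied baseline, 32.6 + 7.4 = 30.3 + 9.7 = 9.9 + 30.1 = 24.2 + 15.8 = 40.0, so every comparison above is against an estimated Tech Center average of 40.0%.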

Office Action

§101 §102 §103 §112 §DP
DETAILED ACTION

This action is responsive to the claims filed on June 22, 2022. Claims 1-20 are under examination. Claims 1-3, 5-13, and 15-20 are provisionally rejected on the ground of nonstatutory double patenting over Application No. 17/846,243. Claims 1-20 are rejected under 35 USC 112(b). Claims 1-20 are rejected under 35 USC 101. Claims 1-5, 11-15, and 19-20 are rejected under 35 USC 102 over Reddy. Claims 6-10 and 16-18 are rejected under 35 USC 103 over Reddy in view of Rosenfeld.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

Claims 1-3, 5-13, and 15-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2-7, 8-9, 12-14, 15-16, and 18 (see mapping below) of copending Application No. 17/846,243 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because they describe reciprocal operations. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim mapping (each '262 limitation is followed by the corresponding '243 language; the examiner's reasoning appears in parentheses):

Claim 1
'262: 1. A method of updating, by a robot system including a robot body, a simulation of an external environment of the robot body, the method comprising:
'243: 1. A method of updating a simulation of an external environment of an agent by a tele-operation system, the method comprising: / 8. […] wherein the agent includes a robot system including a robot body and the at least one sensor of the agent includes at least one image sensor on-board the robot body, and wherein:
'262: loading a simulation of an external environment of the robot body;
'243: 1. […] displaying a simulation of the external environment of the agent to at least one user of the tele-operation system includes displaying a simulation of the external environment of the robot body to at least one user of the tele-operation system; (displaying implies it is loaded)
'262: providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body;
'243: 1. […] the at least one user being physically remote from the agent; […] receiving data collected by at least one sensor of the agent includes receiving data collected by at least one image sensor of the robot body; and (providing to something means that thing receives it)
'262: receiving simulation instructions from the tele-operation system; and
'243: 1. […] receiving simulation instructions from the at least one user of the tele-operation system, the simulation instructions based at least in part on the data collected by at least one sensor of the agent;
'262: updating the simulation of the external environment based on the simulation instructions.
'243: 1. […] updating the simulation of the external environment based on the simulation instructions; and

Claim 2
'262: 2. further comprising: training an artificial intelligence to autonomously update the simulation based at least in part on data collected by at least one sensor on-board the robot body.
'243: 9. […] The method of claim 8, further comprising: training the robot system to autonomously update the simulation of the external environment based on multiple iterations of: the receiving data collected by at least one image sensor of the robot body;

Claim 3
'262: 3. further comprising: storing the artificial intelligence in a non-transitory processor-readable storage memory on-board the robot body.
'243: 8. […] receiving data collected by at least one sensor of the agent includes receiving data collected by at least one image sensor of the robot body; and (in order to transmit data, the data must be stored; here the system receives data from the robot, so the robot would have to have some form of non-transitory memory)

Claim 4: NOT REJECTED

Claim 5
'262: 5. further comprising: training the robot system to autonomously update the simulation of the external environment based on multiple iterations of: providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body; receiving simulation instructions from the tele-operation system; and updating the simulation of the external environment based on the simulation instructions.
'243: 9. The method of claim 8, further comprising: training the robot system to autonomously update the simulation of the external environment based on multiple iterations of: the receiving data collected by at least one image sensor of the robot body; the providing the data collected by at least one image sensor of the robot body to the at least one user of the tele-operation system; the receiving simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one image sensor of the robot body; and the updating the simulation of the external environment based on the simulation instructions.

Claim 6
'262: 6. The method of claim 1 wherein receiving simulation instructions from the tele-operation system includes receiving instructions that describe a modification to the simulation of the external environment.
'243: 2. The method of claim 1 wherein receiving simulation instructions from the at least one user of the tele-operation system includes receiving instructions that describe a modification to the simulation of the external environment.

Claim 7
'262: 7. The method of claim 6 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.
'243: 3. The method of claim 2 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.

Claim 8
'262: 8. The method of claim 6 wherein receiving instructions that describe a modification to the simulation of the external environment includes receiving instructions that describe a modification to at least one object representation in the simulation of the external environment.
'243: 4. The method of claim 2 wherein receiving instructions that describe a modification to the simulation of the external environment includes receiving instructions that describe a modification to at least one object representation in the simulation of the external environment.

Claim 9
'262: 9. The method of claim 8 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.
'243: 5. The method of claim 4 wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.

Claim 10
'262: 10. The method of claim 1 wherein receiving simulation instructions from the tele-operation system includes receiving instructions that describe a new object representation for the simulation of the external environment, and wherein updating the simulation of the external environment based on the simulation instructions includes applying the simulation instructions to add the new object representation to the simulation of the external environment, the new object representation corresponding to a real-world counterpart in the external environment characterized, at least in part, by the data collected by at least one sensor on-board the robot body.
'243: 6. The method of claim 1 wherein receiving simulation instructions from the at least one user of the tele-operation system includes receiving instructions that describe a new object representation for the simulation of the external environment, and wherein updating the simulation of the external environment based on the simulation instructions includes applying the simulation instructions to add the new object representation to the simulation of the external environment, the new object representation corresponding to a real-world counterpart in the external environment characterized, at least in part, by the data collected by at least one sensor of the agent.

Claim 11
'262: 11. The method of claim 1, further comprising: providing additional data collected by at least one sensor on-board the robot body to the tele-operation system; receiving additional simulation instructions from the tele-operation system; and re-updating the simulation of the external environment based on the additional simulation instructions.
'243: 7. The method of claim 1, further comprising: providing additional data collected by at least one sensor of the agent to the at least one user of the tele-operation system; receiving additional simulation instructions from the at least one user of the tele-operation system; re-updating the simulation of the external environment based on the additional simulation instructions; and

Claim 12
'262: 12. A robot system comprising: a robot body; at least one sensor carried by the robot body; at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to: [Method of Claim 1]
'243: 12. A tele-operation system comprising: at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to: [Method of Claim 1]

Claim 13
'262: 13. The robot system of claim 12, further comprising: data and/or processor-executable instructions stored in the at least one non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the robot system to train an artificial intelligence to autonomously update the simulation based at least in part on data collected by at least one sensor of the robot body.
'243: 15. The tele-operation system of claim 12 wherein the agent includes a robot system including a robot body, and the at least one sensor of the agent includes at least one image sensor on-board the robot body. / 16. The tele-operation system of claim 15, further comprising: data and/or processor-executable instructions stored in the non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the tele-operation system to: train the robot system to autonomously update to the simulation of the external environment based on multiple iterations of: receiving simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one image sensor of the robot body; and […]

Claim 14: NOT REJECTED

Claim 15
'262: 15. The robot system of claim 12, further comprising: data and/or processor-executable instructions stored in the at least one non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the robot system to: train the robot system to autonomously update the simulation of the external environment based on multiple iterations of: providing data collected by at least one sensor of the robot body to a tele-operation system that is physically remote from the robot body; receiving simulation instructions from the tele-operation system; and updating the simulation of the external environment based on the simulation instructions.
'243: 1. […] providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body; / 16. The tele-operation system of claim 15, further comprising: data and/or processor-executable instructions stored in the non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the tele-operation system to: train the robot system to autonomously update to the simulation of the external environment based on multiple iterations of: receiving simulation instructions from the at least one user of the tele-operation system based at least in part on the data collected by at least one image sensor of the robot body; and updating the simulation of the external environment based on the simulation instructions.

Claim 16
'262: 16. The robot system of claim 12 wherein the simulation instructions received from the tele-operation system describe a modification to the simulation of the external environment, and wherein the data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to update the simulation of the external environment based on the simulation instructions, cause the robot system to apply the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.
'243: 13. The tele-operation system of claim 12 wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to receive simulation instructions from the at least one user of the tele-operation system, cause the tele-operation system to receive instructions that describe a modification to the simulation of the external environment, and wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to update the simulation of the external environment based on the simulation instructions, cause the tele-operation system to apply the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment.

Claim 17
'262: 17. The robot system of claim 12 wherein the simulation instructions received from the tele-operation system describe a modification to at least one object representation in the simulation of the external environment, and wherein the data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to update the simulation of the external environment based on the simulation instructions, cause the robot system to apply the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.
'243: 14. The tele-operation system of claim 12 wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to receive simulation instructions from the at least one user of the tele-operation system, cause the tele-operation system to receive instructions that describe a modification to at least one object representation in the simulation of the external environment, and wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to update the simulation of the external environment based on the simulation instructions, cause the tele-operation system to apply the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.

Claim 18
'262: 18. The robot system of claim 12 wherein receiving simulation instructions from the tele-operation system includes receiving instructions that describe a new object representation for the simulation of the external environment, and wherein updating the simulation of the external environment based on the simulation instructions includes applying the simulation instructions to add the new object representation to the simulation of the external environment, the new object representation corresponding to a real-world counterpart in the external environment characterized, at least in part, by the data collected by at least one sensor on-board the robot body.
'243: 14. The tele-operation system of claim 12 wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to receive simulation instructions from the at least one user of the tele-operation system, cause the tele-operation system to receive instructions that describe a modification to at least one object representation in the simulation of the external environment, and wherein the processor-executable instructions that, when executed by the at least one processor, cause the tele-operation system to update the simulation of the external environment based on the simulation instructions, cause the tele-operation system to apply the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment.

Claim 19
'262: 19. The robot system of claim 1, further comprising: data and/or processor-executable instructions stored in the at least one non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the robot system to: provide additional data collected by at least one sensor of the robot body to the tele-operation system; receive additional simulation instructions from the tele-operation system; and re-updating the simulation of the external environment based on the additional simulation instructions.
'243: 1. (This is just a repetition of steps from claim 1 with new data.)

Claim 20
'262: 20. A computer program product comprising data and/or processor-executable instructions stored in a non-transitory processor-readable storage medium, the data and/or processor-executable instructions which, when the non-transitory processor-readable storage medium is communicatively coupled to at least one processor of a robot system and the at least one processor executes the data and/or processor-executable instructions, cause the robot system to: [Execute operations of claim 1]
'243: 18. A computer program product comprising data and/or processor-executable instructions stored in a non-transitory processor-readable storage medium, the data and/or processor-executable instructions which, when the non-transitory processor-readable storage medium is communicatively coupled to at least one processor of a tele-operation system and the at least one processor executes the data and/or processor-executable instructions, cause the tele-operation system to: [Execute operations of claim 1]

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Physically Remote

The term "physically remote" in independent claims 1, 5, 12, 15, and 20 is a relative term which renders the claim indefinite. The term "physically remote" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

The Robot System of Claim 1

Claim 19 depends from claim 1, but it appears it was intended to depend from claim 12: claim 1 recites a method, claim 12 recites a robot system, and the elements of claim 19 are apparatus elements. As it stands, claim 19 appears to recite both apparatus elements and process steps, which, under MPEP 2173.05(p)(II), is properly rejected under 35 USC 112(b). Dependent claims that depend from rejected claims are rejected based on their dependency.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Software Per Se

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it is directed to software per se. While the claim recites a non-transitory medium, it is directed to the data stored thereon and does not positively recite the medium as an element of the claim. Accordingly, the claim, as drafted, is directed only to software per se.

Subject Matter Eligibility

Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to ineligible subject matter.

Independent Claims

Claim 12 (Statutory Category: Machine)

Step 2A, Prong 1: Is a judicial exception recited? Yes, the claim recites a mental process. Claim 12 recites: "updating the simulation of the external environment based on the simulation instructions." (Evaluation/mental process: updating a simulation (e.g., modifying an image) can practically be performed in the mind or with the aid of pen, paper, and/or a calculator.) Claim 12 thus recites a mental process, which is an abstract idea.

Step 2A, Prong 2: Is the abstract idea integrated into a practical application? No.

The additional limitations "a robot body; at least one sensor carried by the robot body; at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to:" are generic computing elements recited at a high level of generality and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application.

The steps "load a simulation of an external environment of the robot body; provide data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body; receive simulation instructions from the tele-operation system; and" are mere data gathering, similar to the MPEP 2106.05(g) examples: "e.g., a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent"; "iv. Obtaining information about transactions using the Internet to verify credit card transactions"; "iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display"; and "v. Consulting and updating an activity log (Ultramercial)." Mere data gathering is insignificant extra-solution activity and, under MPEP 2106.05(g), fails to integrate the abstract idea into a practical application. Should it be found otherwise with respect to the load step, loading is a generic computing operation recited in the claim at a high level and, therefore, under MPEP 2106.05(f), fails to integrate the abstract idea into a practical application. Any recited data merely limits the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fails to integrate the abstract idea into a practical application.

None of the additional limitations of claim 12, whether in isolation or in combination, integrate the abstract idea into a practical application. Accordingly, claim 12 is directed to the abstract idea.

Step 2B: Does the claim provide an inventive concept? No.

The additional limitations identified above (the robot body, sensor, processor, and storage-medium elements) are generic computing elements recited at a high level of generality and, under MPEP 2106.05(f), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept.

The load, provide, and receive steps identified above are well-understood, routine, and conventional (WURC) activity similar to the MPEP 2106.05(d) examples: "i. Receiving or transmitting data over a network"; "iii. Electronic recordkeeping"; "iv. Storing and retrieving information in memory"; "i. Determining the level of a biomarker in blood by any means"; and "vi. Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price." Because these steps are WURC and, as previously demonstrated, insignificant extra-solution activity, under MPEP 2106.05(d) and 2106.05(g) the steps fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept. Should it be found otherwise with respect to the load step, loading is a generic computing operation recited at a high level and, therefore, under MPEP 2106.05(f), fails to provide significantly more. Any recited data merely limits the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fails to provide significantly more.

None of the additional limitations of claim 12, whether in isolation or in combination, combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept. Claim 12 is ineligible.

Claim 1 (Statutory Category: Process)

Claim 1 recites the method executed by the configuration of the system of claim 12 and is rejected for the same reasons as claim 12. Claim 1 is ineligible.

Claim 20 (Statutory Category: None; Software Per Se)

Claim 20 is software per se and does not belong to one of the four categories. However, in the interest of compact prosecution, the claim is addressed for eligibility as if Applicant had amended it to positively recite the non-transitory CRM as an element. Claim 20 recites the software, and is likely intended to recite the CRM, of claim 12, so claim 20 is rejected for the same reasons as claim 12. Claim 20 is ineligible.

Dependent Claims

The dependent claims are also ineligible, for the following reasons. Elements recognized as generic computing elements in the independent claims fail to confer eligibility in the dependent claims under MPEP 2106.05(f) for the same reasons. Similarly, data descriptions specific to the technological environment merely restrict the abstract idea to a particular technological environment and, under MPEP 2106.05(h), fail to confer eligibility.

Claims 2 and 13: "further comprising: training an artificial intelligence to autonomously update the simulation based at least in part on data collected by at least one sensor on-board the robot body." This training is a generic computing process described at a high level of generality, especially since there is no recitation of how the training is conducted based on the steps of the independent claims. Under MPEP 2106.05(f), this fails to confer eligibility. Claims 2 and 13 recite no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B. Claims 2 and 13 are ineligible.

Claim 3: "storing the artificial intelligence in a non-transitory processor-readable storage memory on-board the robot body." The storing is mere data gathering and WURC activity, and fails to confer eligibility for at least the same reasons as the providing and receiving steps of the independent claims. Also, the use of a generic CRM is the use of a generic computing element recited at a high level of generality, which fails to confer eligibility under MPEP 2106.05(f). Claim 3 recites no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B.
Claim 3 is ineligible.

Claims 4 and 14: "wherein training an artificial intelligence to autonomously update the simulation based at least in part on data collected by at least one sensor on-board the robot body includes defining an objective function that updates the simulation to minimize discrepancies between the simulation and the data collected by at least one sensor on-board the robot body and optimizing the objective function by the robot system." Defining an objective function and optimizing it are evaluations, which are mental processes practically performable in the mind or with the aid of pen, paper, and a calculator, and are also mathematical calculations, which are mathematical concepts. Mental processes and mathematical calculations are abstract ideas. These abstract ideas merge with the abstract idea of the claims from which these claims depend, leaving no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B. Claims 4 and 14 are ineligible.

Claims 5 and 15: "further comprising: training the robot system to autonomously update the simulation of the external environment based on multiple iterations of: providing data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body; receiving simulation instructions from the tele-operation system; and updating the simulation of the external environment based on the simulation instructions." The training is a generic computing operation recited at a high level of generality, especially as there is no indication of how the robot system is trained using the iteratively repeated steps; it therefore fails to confer eligibility under MPEP 2106.05(f). The repeated iterations of the steps of the independent claims fail to confer eligibility for the same reasons as the first iteration of those steps. Claims 5 and 15 recite no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B. Claims 5 and 15 are ineligible.

Claim 6: "wherein receiving simulation instructions from the tele-operation system includes receiving instructions that describe a modification to the simulation of the external environment." This merely qualifies the receiving step and fails to confer eligibility for the same reasons as the receiving steps of the independent claims. Also, the type of data received merely limits the abstract idea to the particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility at Step 2A, Prong 2, or at Step 2B. Claim 6 is ineligible.

Claim 7: "wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to the simulation of the external environment to cause the simulation of the external environment to more closely match a reality of the external environment." This qualifies the updating step of the independent claims, which is an element of the abstract idea, so it is part of the abstract idea for at least the same reasons; the claim therefore provides no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B. Claim 7 is ineligible.

Claim 8: "wherein receiving instructions that describe a modification to the simulation of the external environment includes receiving instructions that describe a modification to at least one object representation in the simulation of the external environment." This merely qualifies the receiving step and fails to confer eligibility for the same reasons as the receiving steps of the independent claims. Also, the type of data received merely limits the abstract idea to the particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility at Step 2A, Prong 2, or at Step 2B. Claim 8 is ineligible.

Claim 9: "wherein updating the simulation of the external environment based on the simulation instructions includes applying the modification to at least one object representation in the simulation of the external environment to cause the at least one object representation to more closely resemble a corresponding real-world counterpart object in the external environment." This qualifies the updating step of the independent claims, which is an element of the abstract idea, so it is part of the abstract idea for at least the same reasons; the claim therefore provides no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B. Claim 9 is ineligible.

Claims 10 and 18: "wherein receiving simulation instructions from the tele-operation system includes receiving instructions that describe a new object representation for the simulation of the external environment, and" merely qualifies the receiving step and fails to confer eligibility for the same reasons as the receiving steps of the independent claims; the type of data received also merely limits the abstract idea to the particular technological environment and, under MPEP 2106.05(h), fails to confer eligibility. "wherein updating the simulation of the external environment based on the simulation instructions includes applying the simulation instructions to add the new object representation to the simulation of the external environment, the new object representation corresponding to a real-world counterpart in the external environment characterized, at least in part, by the data collected by at least one sensor on-board the robot body" qualifies the updating step of the independent claims, which is an element of the abstract idea, and so provides no additional limitations. Claims 10 and 18 recite no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B. Claims 10 and 18 are ineligible.

Claims 11 and 19: "further comprising: providing additional data collected by at least one sensor on-board the robot body to the tele-operation system; receiving additional simulation instructions from the tele-operation system; and re-updating the simulation of the external environment based on the additional simulation instructions." This is a repetition of the steps of the independent claims and fails to confer eligibility for the same reasons as the corresponding steps in the first iteration. Claims 11 and 19 recite no additional limitations that confer eligibility at Step 2A, Prong 2, or at Step 2B. Claims 11 and 19 are ineligible.

Claim 16: Claim 16 recites a combination of features presented in claims 6 and 7 and fails to confer eligibility for the same reasons as demonstrated for those features. Claim 16 is ineligible.

Claim 17: Claim 17 recites a combination of features presented in claims 8 and 9 and fails to confer eligibility for the same reasons as demonstrated for those features. Claim 17 is ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 11-15, and 19-20: Reddy

Claims 1-5, 11-15, and 19-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by the non-patent literature reference "Shared Autonomy via Deep Reinforcement Learning" by Reddy et al. (Reddy).

Claims 1, 12, and 20

Regarding claim 12, Reddy teaches:

"A robot system comprising: a robot body; at least one sensor carried by the robot body; at least one processor; and at least one non-transitory processor-readable storage medium communicatively coupled to the at least one processor, the at least one non-transitory processor-readable storage medium storing data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to:" (Reddy, Page 1, Fig. 1: the robot body is a quadcopter; Fig. 1(b) shows a person on a computer, with memory and a processor, used to control the quadcopter and simulation, and shows the picture captured by the quadcopter's camera. Page 7, VII: "Users are only allowed to look through the drone's first-person camera to navigate, and are blocked from getting a third-person view of the drone.")

"load a simulation of an external environment of the robot body;" (Reddy, Page 1, Fig. 1: Fig. 1(a) shows the simulated environment, which, in this case, is a lunar lander game with a simulated landing pad corresponding to a real landing pad shown in Fig. 1(c). Page 7, VII:
"Humans find it challenging to simultaneously point the camera at the desired scene and navigate to the precise location of a feasible landing pad under time constraints.")

"provide data collected by at least one sensor on-board the robot body to a tele-operation system that is physically remote from the robot body;" (Reddy, Page 1, Fig. 1: Fig. 1(b) shows a person on a computer cooperating with the robot system to operate the quadcopter remotely. Page 7, VII: "Users are only allowed to look through the drone's first-person camera to navigate, and are blocked from getting a third-person view of the drone.")

"receive simulation instructions from the tele-operation system; and" (Reddy, Page 7, Right Column, VII, User Study With A Physical Robot: "To evaluate our method in a more realistic environment, we formulate a 'perching' task for a real human flying a real quadrotor: land the vehicle on a level, square landing pad at some distance from the initial take-off position, such that the drone's first-person camera is pointed at a specific object in the drone's surroundings, without flying out of bounds or running out of time. Perching a drone at an arbitrary vantage point enables it to be used as a mobile security camera for surveillance applications. Humans find it challenging to simultaneously point the camera at the desired scene and navigate to the precise location of a feasible landing pad under time constraints. An assistive copilot has little trouble navigating to and landing on the landing pad, but does not know where to point the camera because it does not know what the human wants to observe after landing. Together, the human can focus on pointing the camera and the copilot can focus on landing precisely on the landing pad." The user provides instructions via the tele-operation system to the robot as to where to direct the camera to provide the user with a desired visual perspective.)

"update the simulation of the external environment based on the simulation instructions." (Reddy, Page 7, Right Column, Last Paragraph, through Page 8, Right Column, First Paragraph: "We fly the Parrot AR-Drone 2 in an indoor flight room equipped with a Vicon motion capture system to measure the position and orientation of the drone as well as the position of the landing pad. Users are only allowed to look through the drone's first-person camera to navigate, and are blocked from getting a third-person view of the drone. Each episode lasts at most 30 seconds. An episode begins when the drone finishes taking off. An episode ends when the drone lands, flies out of bounds, or time runs out." The simulation, depending on the action, orients the simulated robot in the simulated space, e.g., relative to the boundaries of the simulation.)

Regarding claim 1, claim 1 recites the method executed by the system of claim 12, so claim 1 is rejected for at least the same reasons as claim 12. Regarding claim 20, claim 20 recites a CRM that is an embodiment of the storage media/memory of claim 12, configured to execute the same operations, so claim 20 is rejected for at least the same reasons.

Claims 2 and 13

Regarding claim 13, Reddy teaches:

"The robot system of claim 12, further comprising: data and/or processor-executable instructions stored in the at least one non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the robot system to train an artificial intelligence to autonomously update the simulation based at least in part on data collected by at least one sensor of the robot body." (Reddy, Page 3, Right Column, Method Overview: "Our method takes observations of the environment and the user's controls or inferred goal (when available) as input, and produces a high value action or control output that is as close as possible to the user's control. We learn state-action values via Q-learning with neural network function approximation. In this section, we will describe how the agent combines user input with environmental observations, motivate and describe our choice of deep Q-learning for training the agent, and describe how the agent shares control with the user." The system is trained to autonomously update the simulation based on iterations of the steps taught in the independent claims.)

Regarding claim 2, claim 2 recites the operations performed by the system of claim 13 and is rejected for at least the same reasons as claim 13.

Claim 3

Regarding claim 3, Reddy teaches the features of claim 2, and further:

"The method of claim 2, further comprising: storing the artificial intelligence in a non-transitory processor-readable storage memory on-board the robot body." (Reddy, Page 3, B. Method Overview: "Our method takes observations of the environment and the user's controls or inferred goal (when available) as input, and produces a high value action or control output that is as close as possible to the user's control. We learn state-action values via Q-learning with neural network function approximation. In this section, we will describe how the agent combines user input with environmental observations, motivate and describe our choice of deep Q-learning for training the agent, and describe how the agent shares control with the user." The agent is the robot, and the robot, as an independent agent, is responsible for providing its own training to use in interpreting how to use the user's cooperative input. Page 5, Left Column, Second Paragraph: "The agent uses a multi-layer perceptron with two hidden layers of 64 units each to approximate the Q function […]. The action-similarity function […] in the agent's behavior policy counts the number of dimensions in which actions a and a_h agree […]." The agent/robot itself uses the machine learning model, so it must have memory that stores elements of the artificial intelligence.)

Claims 4 and 14

Regarding claim 14, Reddy teaches:

"The robot system of claim 13 wherein the data and/or processor-executable instructions that, when executed by the at least one processor, cause the robot system to train an artificial intelligence to autonomously update the simulation based at least in part on data collected by at least one sensor of the robot body, cause the robot system to define an objective function that updates the simulation to minimize discrepancies between the simulation and the data collected by at least one sensor of the robot body and optimize the objective function." (Reddy, Page 5, Left Column, Last Paragraph: "The agent uses a multi-layer perceptron with two hidden layers of 64 units each to approximate the Q function […]. The action-similarity function […] in the agent's behavior policy counts the number of dimensions in which actions a and a_h agree […]. As discussed earlier in Section IV, the agent's reward function is composed of a hard-coded function Rgeneral and a user-generated signal Rfeedback. Rgeneral penalizes speed and tilt, since moving fast and tipping over are generally dangerous for any pilot regardless of their intent. Rfeedback emits a large positive reward at the end of the episode if the vehicle." See also the Q-learning minimization at Page 2, Left Column, First Paragraph: "One algorithm for solving this problem is Q-learning [33], which minimizes the Bellman error of the Q function, […] as a proxy for maximizing return." This teaches an objective function used to train the artificial intelligence to autonomously update the simulation based on sensor data from the quadcopter; the discrepancies between the simulation and the collected sensor data are minimized in optimizing the objective function.)

Regarding claim 4, claim 4 recites operations similar to those conducted by the system of claim 14 and is rejected for at least the same reasons as claim 14.

Claims 5 and 15

Regarding claim 15, Reddy teaches the features of claim 12 and further teaches:

"data and/or processor-executable instructions stored in the at least one non-transitory processor-readable storage medium that, when executed by the at least one processor, cause the robot system to: train the robot system to autonomously update the simulation of the external environment based on multiple iterations of:" (Reddy, Page 5, Left Column, Second Paragraph: "The agent uses a multi-layer perceptron with two hidden layers of 64 units each to approximate the Q function […]. The action-similarity function […] in the agent's behavior policy counts the number of dimensions in which actions a and a_h agree […]. As discussed earlier in Section IV, the agent's reward function is composed of a hard-coded function Rgeneral and a user-generated signal Rfeedback. Rgeneral penalizes speed and tilt, since moving fast and tipping over are generally dangerous for any pilot regardless of their intent. Rfeedback emits a large positive reward at the end of the episode if the vehicle." The training operates both with and without the pilot, so the system is trained either to cooperate with humans or to be fully automated. Regardless, the trained model updates the simulation based on iterative determinations with the Q function.)

"providing data collected by at least one sensor of the robot body to a tele-operation system that is physically remote from the robot body; receiving simulation instructions from the tele-operation system; and updating the simulation of the external environment based on the simulation instructions." (Reddy: these are repetitions of the steps of claim 1 and are rejected for
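
For context on the machine-learning details quoted above from Reddy (Q-learning with a multi-layer perceptron of two 64-unit hidden layers approximating the Q function), the following is a minimal sketch of that architecture. It assumes PyTorch, and the observation and action dimensions are placeholders; it illustrates the quoted description, not Reddy's actual code.

    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        # Q-function approximator as described in the quoted passage:
        # an MLP with two hidden layers of 64 units each. Input is an
        # observation (plus user-control) feature vector; output is one
        # Q-value per discrete action. Dimensions here are placeholders.
        def __init__(self, obs_dim: int = 10, num_actions: int = 6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, num_actions),
            )

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs)

    # Q-learning fits such a network by minimizing the Bellman error,
    #   L = E[(r + gamma * max_a' Q(s', a') - Q(s, a))^2],
    # the minimization Reddy is quoted as using "as a proxy for
    # maximizing return."

This Bellman-error loss is the standard Q-learning objective; it is the kind of "objective function" the examiner's §102 discussion of claims 4 and 14 points to in Reddy.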

Prosecution Timeline

Jun 22, 2022
Application Filed
Oct 14, 2025
Non-Final Rejection — §101, §102, §103, §112, §DP (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 12%
With Interview: 99% (+100.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
