Prosecution Insights
Last updated: April 19, 2026
Application No. 18/507,020

ENFORCING ROBOTIC SAFETY CONSTRAINTS BASED ON AI GENERATED SAFETY DESCRIPTIONS

Status: Final Rejection (§101, §103)
Filed: Nov 10, 2023
Examiner: KASPER, BYRON XAVIER
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: 3Laws Robotics Inc.
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 70% (72 granted / 103 resolved), above average (+17.9% vs Tech Center average)
Interview Lift: +18.4% higher allowance rate in resolved cases with an interview than without
Typical Timeline: 3y 0m average prosecution; 36 applications currently pending
Career History: 139 total applications across all art units
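As a sanity check, the headline figures above are internally consistent; the short sketch below (variable names are mine, not the tool's) reproduces the rounded career allow rate and the Tech Center average implied by the "+17.9%" delta.

```python
# Arithmetic check of the examiner statistics shown above (illustrative names).
granted, resolved = 72, 103

career_allow_rate = 100 * granted / resolved
print(f"career allow rate: {career_allow_rate:.1f}%")  # 69.9%, displayed as 70%

delta_vs_tc = 17.9                                     # "+17.9% vs TC avg"
implied_tc_avg = career_allow_rate - delta_vs_tc
print(f"implied TC average: {implied_tc_avg:.1f}%")    # 52.0%
```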

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center averages are estimates. Figures are based on career data from 103 resolved cases.
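The per-statute deltas above are mutually consistent: subtracting each delta from its rate recovers the same Tech Center average estimate, roughly 40%, for every statute. A quick check (variable names are mine):

```python
# Verify that each statute's rate minus its "vs TC avg" delta yields the same
# Tech Center baseline (values copied from the table above).
stats = {
    "101": (10.9, -29.1),
    "103": (56.3, +16.3),
    "102": (11.9, -28.1),
    "112": (16.4, -23.6),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies a TC average of 40.0%
```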

Office Action

Grounds of rejection: §101, §103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

2. This communication is responsive to Application No. 18/507,020 and the amendments filed on 12/9/2025.

3. Claims 1-23 are presented for examination.

Information Disclosure Statement

4. The information disclosure statements (IDS) submitted on 11/18/2024, 11/29/2024, and 1/18/2026 have been fully considered by the Examiner.

Response to Arguments

5. Applicant's arguments filed 12/9/2025 with respect to the rejection of claims 1-20 under 35 U.S.C. 101 have been fully considered, but they are not persuasive. Regarding independent claim 1, the Applicant argues that the amended claim recites more than an abstract idea and recites a practical application. The Examiner respectfully disagrees for the following reasons.

On pages 7-10 of the Applicant's remarks filed 12/9/2025, the Applicant argues that the amended claims do not recite an abstract idea. While the Examiner agrees that the claims do not recite mathematical concepts or certain methods of organizing human activity, the claims still recite abstract ideas in the form of mental processes, because the limitations of the claim can be performed within the human mind. The Applicant states that the computer components of the claim go beyond the scope of "generic"; however, as claimed and as described in the specification of the instant application, the Examiner has determined that all recited computer components of the claim are in fact generic. The Applicant also argues that the additional elements integrate the judicial exception into a practical application by incorporating an improvement in the functioning of a computer. However, the Examiner does not see an improvement to any of the generic computer components, as "improvements" are defined in MPEP § 2106.05(a).

Further, on pages 10-11 of the Applicant's remarks filed 12/9/2025, the Applicant argues that the claim as a whole recites an inventive concept. However, even considering the claim as a whole, an inventive concept or improvement to the technology is not found. For these reasons, the Examiner does not find the Applicant's remarks regarding the rejection of claim 1 under 35 U.S.C. 101 persuasive, and the rejection is maintained, as described in more detail below. Regarding independent claims 11 and 19, because these claims contain limitations similar to those of claim 1, they remain rejected for reasons similar to those given for claim 1, as described below. Regarding dependent claims 2-10, 12-18, and 20, because each of these claims depends from claim 1, 11, or 19, they likewise remain rejected, as described below.

6. Applicant's arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 102 and/or 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Regarding independent claim 1, the Examiner agrees that US 20230311335 A1 to Hausman fails to teach all of the amendments to the claim. However, in light of the amendments and the Applicant's remarks, an updated search was conducted, and a new ground of rejection of claim 1 has been entered, as described below. Regarding independent claims 11 and 19, because these claims contain limitations similar to those of claim 1, they remain rejected for similar reasons, as described below. Regarding dependent claims 2-10, 12-18, and 20, because each of these claims depends from claim 1, 11, or 19, they likewise remain rejected, as described below.

7. The Examiner notes the Applicant's request for an interview should any issues remain that would prevent allowance of the application. However, because the new ground of rejection over the newly found prior art was necessitated by the Applicant's amendments to the claims, the Examiner would like to give the Applicant an opportunity to review the new ground of rejection over that prior art before an interview. Once the Applicant has reviewed the new ground of rejection, the Examiner is willing to conduct an interview to discuss any remaining issues with the application that the Applicant would like to discuss.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

8. Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Independent Claim 1

101 Analysis, Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter?

Claim 1 is directed to a method for ensuring safe execution of robotic commands by a robot (i.e., a process), and therefore falls within at least one of the four statutory categories.

101 Analysis, Step 2A, Prong I: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?

Regarding Claim 1, the claim is determined to fall under the category of an abstract idea, defined as any of the following: a mathematical concept, certain methods of organizing human activity, and/or mental processes.
Claim 1 recites:

A method for ensuring safe execution of robotic commands by a robot, the method comprising:
receiving a description of an environment;
forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment, and wherein the prompt is constituted into a computer readable storage area;
responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment;
generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and
providing the modified planning signals to the robot.

Under the Examiner's broadest reasonable interpretation, the "forming," "classifying," "labeling," and "modifying" limitations of claim 1 recite mental processes, because those limitations can be performed in the human mind.

The limitation "forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment" is, in the context of this claim, an abstract idea in which a human forms (i.e., generates, creates, thinks of, etc.) a prompt to input into a generic artificial intelligence entity module. Humans have the ability, entirely within the mind, to think of and create sentence/description prompts describing the scene of the current environment. For example, if a bedroom contains a bed and a dresser, a human can reasonably form prompts within the mind based on this observed information, such as thinking "the bed is located next to the window" or "the dresser is located on the opposite side of the room from the bed." As described in paragraph [0073] of the specification of the instant application, these prompts may take the form of textualized natural language descriptions and audible natural language descriptions (e.g., speech) from the user. Thus, the Examiner submits that this step of forming prompts can be done entirely within the human mind.

The limitation "generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints" is, in the context of this claim, an abstract idea in which a human classifies and labels (i.e., organizes items based on observed features) the LLM response. Humans have the ability to recognize data and organize it into categories, which may be performed within the human mind. For example, if half of the LLM responses were about controlling a robot's position in the environment and the other half were about controlling the robot's speed within the environment, the human mind can recognize these differences and, by association, organize the LLM responses into those pertaining to position and those pertaining to speed. In addition, regarding the sub-portions corresponding to one or more controllable condition safety constraints, the human mind has the ability to recognize safety constraints regarding outputs. For example, if a vase is recognized in the environment, a human can determine to reduce speed and keep a buffer distance away from the vase by recognizing its properties, which may be performed entirely within the mind.

The limitation "based on the labeling, modifying at least some of the sub-portions to generate modified planning signals" is, in the context of this claim, an abstract idea in which a human modifies (i.e., changes, adjusts, etc.) some of the sub-portions of the LLM responses to create modified planning signals. Based on observed data, humans have the ability within the mind to adjust a plan. Using the example above, for the LLM responses labeled with respect to the speed of the robot, if a human observes that some of the responses say to go slow around the dresser, then the human can form a modified plan that the robot should move slowly around the dresser, all within the human mind. The Examiner notes that this plan is not necessarily performed by the robot and, instead, may be entirely conceptual.

101 Analysis, Step 2A, Prong II: Does the claim recite any additional elements that integrate the judicial exception into a practical application?

The additional elements of claim 1 do not integrate the judicial exception into a practical application, as explained below with respect to each additional element.
The additional elements of claim 1 are: receiving a description of an environment; the prompt being constituted into a computer readable storage area; receiving an LLM response; and providing the modified planning signals to the robot. The remaining limitations recite the abstract ideas identified above. The Examiner has determined that these additional elements do not integrate the abstract ideas into a practical application.

Regarding the limitation "receiving a description of an environment," this is simply an insignificant pre-solution activity in the form of data gathering, with nothing else recited to raise it above that. Regarding the limitation "wherein the prompt is constituted into a computer readable storage area," this simply recites the use of generic computing elements, recited at a high level of generality, that merely automate the abstract ideas of the claim. As the storage area is defined in paragraphs [0166] and [0167] of the specification of the instant application, it is a generic computing element, with no special or unique features or functions in any of the examples to overcome this conclusion. Regarding the limitation "responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment," this is simply data gathering in the form of receiving a response, the response being the data gathered. Regarding the limitation "providing the modified planning signals to the robot," this step does not integrate the abstract ideas of the claim into a practical application because it simply applies the abstract ideas described above to a generically recited robot, with nothing more. The Examiner notes that the robot does not necessarily perform actions using the modified planning signals provided to it.

Thus, for the additional elements of claim 1 analyzed individually, there is insufficient reasoning as to why they turn the abstract ideas into a practical application. Furthermore, considering the additional elements with respect to the claim as a whole does not add any further reasoning as to why they amount to a practical application. Taken as a whole, the additional elements recite the insignificant extra-solution activity of data gathering, the use of generic computing elements recited at a high level of generality that merely automate the abstract ideas of the claim, and simply "applying" the abstract ideas of the claim to a generically recited robot, with nothing more to overcome this. Accordingly, the additional limitations do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract ideas.

101 Analysis, Step 2B: Does the claim recite any additional elements that amount to significantly more than the judicial exception?

With regards to Step 2B of the 101 analysis, claim 1 does not recite any additional elements that amount to significantly more than the judicial exception, for the same reasons as described above in Step 2A, Prong II. The steps of receiving a description of the environment and receiving an LLM response are simply insignificant extra-solution activities in the form of data gathering, with nothing else recited to raise them above that. Further, the step of storing the prompt into a computer readable storage area simply uses generically recited computing elements to perform the abstract ideas of the claim, and the computing elements do not recite any special or unique structures to overcome this. Further, with regards to the step of "providing the modified planning signals to the robot," the Examiner submits that this is simply an "apply it" step, applying the abstract ideas of the claim to a generically recited robot, without reciting anything more that would amount to a practical application. Generally applying an exception using insignificant extra-solution activities, generic computing elements recited at a high level of generality that merely automate the abstract ideas of the claim, and a mere "apply it" step cannot provide an inventive concept.

Dependent claims 2-10 and 21 do not recite further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are further directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-10 and 21 are not patent eligible under the same rationale as provided in the rejection of independent claim 1.
Regarding Claim 2, "wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. As described in paragraphs [00190]-[00193] of the specification of the instant application and Figure 8C of the drawings, the supervisory agent is a generically recited computer module configured to perform the abstract ideas of classifying the LLM responses and generating the modified planning signals, as described above for claim 1.

Regarding Claim 3, "further comprising labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a 'do not approach' semantic, an 'actively avoid' semantic, or a collision tolerant subrange," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the abstract idea of 'labeling' to include specific semantics regarding object types. However, the Examiner submits that the human mind can identify certain categories of objects and label them as such based on observations.

Regarding Claim 4, "wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the abstract idea of classifying, the classification being that an object is collision tolerant, which the Examiner submits can be performed within the human mind.

Regarding Claim 5, "wherein at least some portions of the LLM response are interpreted using at least one of, natural language processing, or image processing," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim recites the abstract idea of interpreting the natural language or images provided, and the Examiner submits that this step of interpreting can be performed within the human mind.

Regarding Claim 6, "wherein the description of the environment is given in one or more of, a textualized natural language description, an audible natural language description, or an image," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the gathered data in the form of the description of the environment, but this definition does not bring the gathered data into a practical application.

Regarding Claim 7, "further comprising forming a further large language model prompt to the artificial intelligence entity module, wherein the further large language model prompt is based at least in part on the LLM response," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply recites the abstract idea of forming an additional LLM prompt based on a response, which the Examiner submits is an abstract idea for the same reasons as described above for claim 1.

Regarding Claim 8, "further comprising forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply recites the extra-solution activity of data gathering in the form of taking a gathered image and providing it to the artificial intelligence entity, as this process is described in paragraph [0073] of the specification of the instant application.

Regarding Claim 9, "wherein the further artificial intelligence entity module operates in an image mode," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. As this process is described in paragraph [0073] of the specification of the instant application, it is simply data gathering in the form of the artificial intelligence entity receiving images, with nothing more.

Regarding Claim 10, "further comprising requesting the LLM to produce a response that includes one or more safe operation limits or one or more controllable condition safety constraints," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply recites the abstract idea in the form of a request (i.e., forming a specific type of LLM prompt) for the LLM's response, which the Examiner submits is an abstract idea for the same reasons as described above for claim 1.

Regarding Claim 21, "wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the controllable condition safety constraints applied to the abstract idea of 'labeling'; however, both examples may reasonably still be labeled using the human mind alone.

Independent Claim 11

101 Analysis, Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter?
Claim 11 is directed to a non-transitory computer readable medium having stored thereon a sequence of instructions (i.e., a machine), and therefore falls within at least one of the four statutory categories.

101 Analysis, Step 2A, Prong I: Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?

Regarding Claim 11, the claim is determined to fall under the category of an abstract idea, defined as any of the following: a mathematical concept, certain methods of organizing human activity, and/or mental processes.

Claim 11 recites:

A non-transitory computer readable medium having stored thereon a sequence of instructions which, when stored in memory and executed by one or more processors, causes the one or more processors to perform a set of acts for ensuring safe execution of robotic commands by a robot, the set of acts comprising:
receiving a description of an environment;
forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment, and wherein the prompt is constituted into a computer readable storage area;
responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment;
generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and
providing the modified planning signals to the robot.

Under the Examiner's broadest reasonable interpretation, the "forming," "classifying," "labeling," and "modifying" limitations of claim 11 recite mental processes, because those limitations can be performed in the human mind.

The limitation "forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment" is, in the context of this claim, an abstract idea in which a human forms (i.e., generates, creates, thinks of, etc.) a prompt to input into a generic artificial intelligence entity module. Humans have the ability, entirely within the mind, to think of and create sentence/description prompts describing the scene of the current environment. For example, if a bedroom contains a bed and a dresser, a human can reasonably form prompts within the mind based on this observed information, such as thinking "the bed is located next to the window" or "the dresser is located on the opposite side of the room from the bed." As described in paragraph [0073] of the specification of the instant application, these prompts may take the form of textualized natural language descriptions and audible natural language descriptions (e.g., speech) from the user. Thus, the Examiner submits that this step of forming prompts can be done entirely within the human mind.

The limitation "generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints" is, in the context of this claim, an abstract idea in which a human classifies and labels (i.e., organizes items based on observed features) the LLM response. Humans have the ability to recognize data and organize it into categories, which may be performed within the human mind. For example, if half of the LLM responses were about controlling a robot's position in the environment and the other half were about controlling the robot's speed within the environment, the human mind can recognize these differences and, by association, organize the LLM responses into those pertaining to position and those pertaining to speed. In addition, regarding the sub-portions corresponding to one or more controllable condition safety constraints, the human mind has the ability to recognize safety constraints regarding outputs. For example, if a vase is recognized in the environment, a human can determine to reduce speed and keep a buffer distance away from the vase by recognizing its properties, which may be performed entirely within the mind.

The limitation "based on the labeling, modifying at least some of the sub-portions to generate modified planning signals" is, in the context of this claim, an abstract idea in which a human modifies (i.e., changes, adjusts, etc.) some of the sub-portions of the LLM responses to create modified planning signals. Based on observed data, humans have the ability within the mind to adjust a plan. Using the example above, for the LLM responses labeled with respect to the speed of the robot, if a human observes that some of the responses say to go slow around the dresser, then the human can form a modified plan that the robot should move slowly around the dresser, all within the human mind. The Examiner notes that this plan is not necessarily performed by the robot and, instead, may be entirely conceptual.

101 Analysis, Step 2A, Prong II: Does the claim recite any additional elements that integrate the judicial exception into a practical application?

The additional elements of claim 11 do not integrate the judicial exception into a practical application, as explained below with respect to each additional element.
The additional elements of claim 11 are: the one or more processors and the memory; receiving a description of an environment; the prompt being constituted into a computer readable storage area; receiving an LLM response; and providing the modified planning signals to the robot. The remaining limitations recite the abstract ideas identified above. The Examiner has determined that these additional elements do not integrate the abstract ideas into a practical application.

Regarding the limitations of the processor and the memory, these are simply generic computing elements recited at a high level of generality that merely automate the abstract ideas applied to them. As the processor is described in paragraphs [00159], [00166], [00175], and [00176] of the specification of the instant application, and the memory in paragraphs [00159], [00166], and [00170]-[00177], both are simply generic computing elements, recited at a high level of generality, with no special or unique features or functions to raise them above that. Regarding the limitation "receiving a description of an environment," this is simply an insignificant pre-solution activity in the form of data gathering, with nothing else recited to raise it above that. Regarding the limitation "wherein the prompt is constituted into a computer readable storage area," this simply recites the use of generic computing elements, recited at a high level of generality, that merely automate the abstract ideas of the claim. As the storage area is defined in paragraphs [0166] and [0167] of the specification of the instant application, it is a generic computing element, with no special or unique features or functions in any of the examples to overcome this conclusion. Regarding the limitation "responsive to prompting the artificial intelligence entity, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment," this is simply data gathering in the form of receiving a response, the response being the data gathered. Regarding the limitation "providing the modified planning signals to the robot," this step does not integrate the abstract ideas of the claim into a practical application because it simply applies the abstract ideas described above to a generically recited robot, with nothing more.

Thus, for the additional elements of claim 11 analyzed individually, there is insufficient reasoning as to why they turn the abstract ideas into a practical application. Furthermore, considering the additional elements with respect to the claim as a whole does not add any further reasoning as to why they amount to a practical application. Taken as a whole, the additional elements recite generic computing elements recited at a high level of generality that merely automate the abstract ideas of the claim, the insignificant extra-solution activity of data gathering, and simply "applying" the abstract ideas of the claim to a generically recited robot, with nothing more to overcome this. Accordingly, the additional limitations do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract ideas.

101 Analysis, Step 2B: Does the claim recite any additional elements that amount to significantly more than the judicial exception?

With regards to Step 2B of the 101 analysis, claim 11 does not recite any additional elements that amount to significantly more than the judicial exception, for the same reasons as described above in Step 2A, Prong II. The processor, memory, and computer readable storage area, as described in the specification of the instant application, are simply generic computing elements recited at a high level of generality, with no special or unique features or functions to raise them above that. Further, the steps of receiving a description of the environment and receiving an LLM response are simply insignificant extra-solution activities in the form of data gathering, with nothing else recited to raise them above that. Further, with regards to the step of "providing the modified planning signals to the robot," the Examiner submits that this is simply an "apply it" step, applying the abstract ideas of the claim to a generically recited robot, without reciting anything more that would amount to a practical application.
Generally applying an exception using generic computing elements recited at a high level of generality that merely automate the abstract ideas of the claims, insignificant extra-solution activities, and the mere step of "apply it" cannot provide an inventive concept. Dependent claims 12-18 and 22 do not recite further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are further directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 12-18 and 22 are not patent eligible under the same rationale as provided in the rejection of independent claim 11. Regarding Claim 12, "wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. As the supervisory agent is described within paragraphs [00190] – [00193] of the specification of the instant application and Figure 8C of the drawings, it is a generically recited computer module configured to perform the abstract ideas of classifying the LLM responses and generating the modified planning signals, as described above in claim 11. Regarding Claim 13, "further comprising instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a “do not approach” semantic, an “actively avoid” semantic, or a collision tolerant subrange," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. 
The claim simply further defines the abstract idea of ‘labeling’ to include specific semantics regarding object types. However, the Examiner submits that the human mind can identify certain categories of objects and label them as such based on observations. Regarding Claim 14, "wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the abstract idea of classifying, the classification being that an object is collision tolerant, which the Examiner submits can be performed within the human mind. Regarding Claim 15, "wherein at least some portions of the LLM response are interpreted using at least one of, natural language processing, or image processing," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim recites the abstract idea of interpreting the natural language or images provided, which the Examiner submits can be performed within the human mind. Regarding Claim 16, "wherein the description of the environment is given in one or more of, a textualized natural language description, an audible natural language description, or an image," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the gathered data in the form of the description of the environment, but this definition does not bring the gathered data into a practical application. 
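The "do not approach" / "actively avoid" / collision-tolerant semantics discussed for claims 13 and 14 can be pictured as a minimal lookup sketch. This is purely illustrative and not taken from the application's specification; the object-to-semantic mapping and the default label are assumptions.

```python
# Hypothetical sketch of labeling classified objects with the safety
# semantics recited in claims 13-14. The mapping below is an assumption
# for illustration only, not the applicant's disclosed implementation.
DO_NOT_APPROACH = "do not approach"
ACTIVELY_AVOID = "actively avoid"
COLLISION_TOLERANT = "collision tolerant"

SEMANTIC_RULES = {
    "human": DO_NOT_APPROACH,
    "vase": ACTIVELY_AVOID,
    "curtain": COLLISION_TOLERANT,  # e.g., a "false object" in the sense of claim 14
}

def label_objects(objects):
    """Attach a safety semantic to each classified object; unknown objects
    default to "actively avoid" (an assumed conservative choice)."""
    return {obj: SEMANTIC_RULES.get(obj, ACTIVELY_AVOID) for obj in objects}
```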
Regarding Claim 17, "further comprising instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of forming a further large language model prompt to the artificial intelligence entity module, wherein the further large language model prompt is based at least in part on the LLM response," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply recites the abstract idea of forming an additional LLM prompt based on a response, which the Examiner submits is an abstract idea for the same reasons as described above in claim 11. Regarding Claim 18, "further comprising instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply recites the extra-solution activity of data gathering in the form of taking a gathered image and providing it to the artificial intelligence entity, as this process is described in paragraph [0073] of the specification of the instant application. Regarding Claim 22, "wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the controllable condition safety constraints applied to the abstract idea of ‘labeling’; however, both examples may reasonably still be labeled by using the human mind alone. 
Independent Claim 19:

101 Analysis: Step 1
Is the claim directed to a process, machine, manufacture, or composition of matter?

Claim 19 is directed to a system for ensuring safe execution of robotic commands by a robot (i.e., a machine), and therefore is within at least one of the four statutory categories.

101 Analysis: Step 2A Prong I
Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea?

Regarding Claim 19, the claim is determined to fall under the category of an abstract idea, defined as the following: a mathematical concept, certain methods of organizing human activity, and/or mental processes. Claim 19 recites:

A system for ensuring safe execution of robotic commands by a robot, the system comprising: a storage medium having stored thereon a sequence of instructions; and one or more processors that execute the sequence of instructions to cause the one or more processors to perform a set of acts, the set of acts comprising, receiving a description of an environment; forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment, and wherein the prompt is constituted into a computer readable storage area; responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment; generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. 
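For orientation, the sequence of acts recited in claim 19 can be sketched as a toy Python pipeline. This is purely illustrative and not the applicant's implementation: the prompt wording, the keyword-based labeling of safety-constraint sub-portions, and the "enforce:" prefix on the modified planning signals are all assumptions.

```python
def safe_execution_pipeline(environment_description, llm, robot):
    """Toy walk-through of the acts recited in claim 19 (illustrative only).
    `llm` and `robot` are stand-in callables, not real interfaces."""
    # forming an LLM prompt based at least in part on the environment description
    prompt = ("Give dimensions, limits, and safety ratings for: "
              + environment_description)
    # responsive to prompting, receiving an LLM response
    response = llm(prompt)
    # classifying/labeling sub-portions that correspond to controllable
    # condition safety constraints (assumed keyword heuristic)
    constraints = [line for line in response.splitlines()
                   if "speed" in line or "distance" in line]
    # modifying the labeled sub-portions into modified planning signals
    signals = ["enforce: " + c for c in constraints]
    # providing the modified planning signals to the robot
    robot(signals)
    return signals
```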
Under the Examiner’s broadest reasonable interpretation, the phrases bolded above in claim 19 recite mental processes, where the limitations can be performed in the human mind. With regards to ‘forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment,’ this limitation, in the context of the claim, is an abstract idea, where a human forms (i.e., generates, creates, thinks of, etc.) a prompt to input into a generic artificial intelligence entity module. Humans have the ability, entirely within the mind, to think of and create sentence/description prompts describing the scene of the current environment. For example, if a bedroom contains a bed and a dresser, a human can reasonably form prompts within the mind based on this observed information, for example, thinking “the bed is located next to the window” or “the dresser is located on the opposite side of the room from the bed.” As paragraph [0073] of the specification of the instant application describes the different types of prompts, these formed prompts include the format of textualized natural language descriptions and audible natural language descriptions (e.g., speech) from the user. Thus, the Examiner submits that this step of forming prompts can be done entirely within the human mind. With regards to ‘generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints,’ this limitation, in the context of the claim, is an abstract idea, where a human classifies and labels (i.e., organizes items based on observed features) the responses from the LLM. Humans have the ability to recognize and organize data into certain categories, which may be performed within the human mind. 
For example, if half of the LLM responses were about controlling a robot’s position in the environment and the other half were about controlling the robot’s speed within the environment, the human mind can recognize these differences and organize the LLM responses into those pertaining to position and those pertaining to speed by association. In addition, regarding the sub-portions corresponding to one or more controllable condition safety constraints, the human mind has the ability to recognize safety constraints regarding outputs. For example, if a vase is recognized in the environment, a human can determine to reduce speed and keep a buffer distance away from the vase by recognizing its properties, which may be performed entirely within the mind. With regards to ‘based on the labeling, modifying at least some of the sub-portions to generate modified planning signals,’ this limitation, in the context of the claim, is an abstract idea, where a human modifies (i.e., changes, adjusts, etc.) some of the sub-portions of the LLM responses to create modified planning signals. Based on observed data, humans have the ability within the mind to adjust a plan. Using the example above, for the LLM responses labeled with respect to the speed of the robot, if a human observes that some of the responses say to go slow around the dresser, then the human can modify the plan so that the robot should move slowly around the dresser, all within the human mind. The Examiner notes that this plan is not necessarily performed by the robot, and instead, may be entirely conceptual.

101 Analysis: Step 2A Prong II
Does the claim recite any additional elements that integrate the judicial exception into a practical application?

The additional elements of claim 19 do not integrate the judicial exception into a practical application. The additional elements of claim 19, as shown below, are underlined, while the abstract ideas of the claim are bolded. 
Claim 19 recites: A system for ensuring safe execution of robotic commands by a robot, the system comprising: a storage medium having stored thereon a sequence of instructions; and one or more processors that execute the sequence of instructions to cause the one or more processors to perform a set of acts, the set of acts comprising, receiving a description of an environment; forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment, and wherein the prompt is constituted into a computer readable storage area; responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment; generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. The Examiner has determined that the additional elements of the claim underlined above do not integrate the abstract ideas listed above into a practical application. Regarding the limitations of the processor and the storage medium, these are simply generic computing elements recited at a high level of generality that merely automate the abstract ideas applied to them. 
As the processor is described within paragraphs [00159], [00166], [00175], and [00176] of the specification of the instant application, and as the storage medium is described within paragraphs [00166] and [00170] – [00177], these are simply generic computing elements, recited at a high level of generality, with no special or unique features/functions associated with them to bring them above this. Regarding the limitation "receiving a description of an environment," this is simply insignificant pre-solution activity in the form of data gathering, with nothing else stated to bring it above this. Regarding the limitation "wherein the prompt is constituted into a computer readable storage area," this simply recites the use of generic computing elements, recited at a high level of generality, that merely automate the abstract ideas of the claim. As the storage area is defined in paragraphs [0166] and [0167] of the specification of the instant application, this simply recites generic computing elements, with no special or unique features or functions of any of the examples to overcome this conclusion. Regarding the limitation "responsive to prompting the artificial intelligence entity, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment," this is simply data gathering in the form of receiving a response, the response being the data gathered. Regarding the limitation "providing the modified planning signals to the robot," this step does not integrate the abstract ideas of the claim into a practical application because it simply applies the abstract ideas described above to a generically recited robot, with nothing more. Thus, for the additional elements of claim 19 analyzed individually, there is insufficient reasoning as to why the additional elements turn the abstract ideas into practical applications. 
Furthermore, the additional elements, viewed with respect to the claim as a whole, do not supply any further reasoning as to why they give rise to a practical application. Taken as a whole, the additional elements recite generic computing elements, recited at a high level of generality, that merely automate the abstract ideas of the claim, the insignificant extra-solution activity of data gathering, and the mere application of the abstract ideas of the claim to a generically recited robot, without anything more to overcome this. Accordingly, the additional limitations do not integrate the abstract ideas into a practical application because they do not impose any meaningful limits on practicing the abstract ideas.

101 Analysis: Step 2B
Does the claim recite any additional elements that amount to significantly more than the judicial exception?

With regards to step 2B of the 101 analysis, claim 19 does not recite any additional elements that amount to significantly more than the judicial exception, for the same reasons as described above in step 2A prong II of the 101 analysis. With regards to the processor, storage medium, and computer readable storage area, as these elements are described within the specification of the instant application, they are simply generic computing elements recited at a high level of generality that merely automate the abstract ideas of the claim, with no special or unique features or functions to bring them above this. Further, with regards to the steps of receiving a description of the environment and receiving an LLM response, these are simply insignificant extra-solution activities in the form of data gathering, with nothing else recited to bring them above being simply this. 
Further, with regards to the step of "providing the modified planning signals to the robot," the Examiner submits that this is simply an "apply it" step, simply applying the abstract ideas of the claim to a generically recited robot, without stating anything above this that would recite a practical application. Generally applying an exception using generic computing elements recited at a high level of generality that merely automate the abstract ideas of the claim, insignificant extra-solution activities, and the mere step of "apply it" cannot provide an inventive concept. Dependent claims 20 and 23 do not recite further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are further directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 20 and 23 are not patent eligible under the same rationale as provided in the rejection of independent claim 19. Regarding Claim 20, "wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. As the supervisory agent is described within paragraphs [00190] – [00193] of the specification of the instant application and Figure 8C of the drawings, it is a generically recited computer module configured to perform the abstract ideas of classifying the LLM responses and generating the modified planning signals, as described above in claim 19. 
Regarding Claim 23, "wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle," the dependent claim does not recite any additional elements that are significantly more than the judicial exception. The claim simply further defines the controllable condition safety constraints applied to the abstract idea of ‘labeling’; however, both examples may reasonably still be labeled by using the human mind alone. In conclusion, as explained above, claims 1-23 are rejected under 35 U.S.C. 101 as directed to ineligible subject matter in the form of an abstract idea, with insignificant additional elements that do not overcome the judicial exception.

Claim Rejections - 35 USC § 103

9. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

10. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

11. Claims 1, 2, 5, 6, 7, 10, 11, 12, 15, 16, 17, and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Hausman et al. 
(US 20230311335 A1 hereinafter Hausman) in view of Shinohara (US 20200342872 A1 hereinafter Shinohara). Regarding Claim 1, Hausman teaches a method for ensuring safe execution of robotic commands by a robot ([0056] via “Robot 110 also includes one or more processors that, for example: process, using an LLM, an LLM prompt that is based on the FF NL input 105 to generate LLM output; determine, based on the LLM output, descriptions of robotic skills, and value function(s) for robotic skills, robotic skill(s) to implement in performing the robotic task; control the robot 110, during performance of the robotic task, based on the determined robotic skill(s); etc.”), the method comprising: receiving a description of an environment ([0061] via “In some implementations, the LLM engine 130 can optionally generate the LLM prompt 205A further based on one or more of scene descriptor(s) 202A of a current environment of the robot 110, prompt example(s) 203A, and/or an explanation 204A.”); forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment ([0060] via “In FIG. 2A, the LLM engine 130 generates an LLM prompt 205A based on the FF NL input 105 (“bring me a snack from the table”). The LLM engine 130 can generate the LLM prompt 205A such that it conforms strictly to the FF NL input 105 or can generate the LLM prompt 205A such that it is based on, but does not strictly conform to, the FF NL input 105. For example, as illustrated by LLM prompt 205A1, a non-limiting example of LLM prompt 205A, the LLM prompt can be “How would you bring me a snack from the table? 
I would 1.”.”), ([0061] via “In some implementations, the LLM engine 130 can optionally generate the LLM prompt 205A further based on one or more of scene descriptor(s) 202A of a current environment of the robot 110, prompt example(s) 203A, and/or an explanation 204A.”), and wherein the prompt is constituted into a computer readable storage area ([0059] via “Turning now to FIG. 2A, a process flow of how various example components can interact in selecting an initial robotic skill to implement responsive to the FF NL instruction 150 of FIG. 1A and in the environment of FIG. 1B. The example components, illustrated in FIG. 2A, include an LLM engine 130, an LLM 150, a task-grounding engine 132, a world-grounding engine 134, value function model(s), a selection engine 130, and an implementation engine 136. One or more of the illustrated components can be implemented by the robot 110 (e.g., utilizing processor(s) and/or memory thereof) and/or utilizing remote computing device(s) (e.g., cloud-based server(s)) that are in network communication with the robot 110.”), ([0064] via “The LLM engine 130 processes the generated LLM prompt 205A, using the LLM 150, to generate LLM output 206A.”), ([0109] via “These software modules are generally executed by processor 614 alone or in combination with other processors. 
Memory 625 used in the storage subsystem 624 can include a number of memories including a main random access memory (RAM) 630 for storage of instructions and data during program execution and a read only memory (ROM) 632 in which fixed instructions are stored.”); responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment ([0056] via “Robot 110 also includes one or more processors that, for example: process, using an LLM, an LLM prompt that is based on the FF NL input 105 to generate LLM output; determine, based on the LLM output, descriptions of robotic skills, and value function(s) for robotic skills, robotic skill(s) to implement in performing the robotic task; control the robot 110, during performance of the robotic task, based on the determined robotic skill(s); etc.”), ([0064] via “The LLM engine 130 processes the generated LLM prompt 205A, using the LLM 150, to generate LLM output 206A. As described herein, the LLM output 206A can model a probability distribution, over candidate word compositions, and is dependent on the LLM prompt 205A.”), ([0065] via “The task grounding engine 132 generates task-grounding measures 208A and generates the task-grounding measures 208A based on the LLM output 206A and skill descriptions 207. Each of the skill descriptions 207 is descriptive of a corresponding skill that the robot 110 is configured to perform. For example, “go to the table” can be descriptive of a “navigate to table” skill that the robot can perform by utilizing a trained navigation policy with a navigation target of “table” (or of a location corresponding to a “table”). As another example, “go to the sink” can be descriptive of a “navigate to sink” skill that the robot can perform by utilizing the trained navigation policy with a navigation target of “sink” (or of a location corresponding to a “sink”). 
As yet another example, “pick up a bottle” can be descriptive of a “grasp a bottle” skill that the robot can perform utilizing grasping heuristics fine-tuned to a bottle and/or using a trained grasping network.”). Hausman is silent on generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. However, Shinohara teaches generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response ([0030] via “The robot teaching device 30 identifies whether or not the voice-inputted phrase includes the recognition target word stored in the correspondence storage section 312 (step S13). When the voice-inputted phrase does not include the recognition target word (S13: No), the process returns to step S12. Here, “hand open” uttered by the operator OP is stored in the correspondence storage section 312 as the recognition target word. In this case, it is determined that the voice-inputted phrase includes the recognition target word (S13: Yes), and the process proceeds to step S14.”), (Note: The Examiner interprets the identification of whether the voice-inputted phrase includes a recognition target word or not (step S13) as the classification of the LLM response.); labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints ([0025] via “Table 1 indicated below shows an example of information stored in the correspondence storage section 312. 
In the example of Table 1, a recognition target word “hand open” is associated with an instruction HOP, a recognition target word “hand close” is associated with an instruction HCL, a recognition target word “plus X” is associated with an instruction PX, and a recognition target word “box open” is associated with an instruction BOP. Here, the respective instructions in Table 1 have the following meanings.”), ([0035] via “As illustrated in FIG. 2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), (Note: The Examiner interprets the speed of which the robot operates as a controllable condition safety constraint. The Examiner further interprets the specific instruction that each specific recognized target word corresponds to as the labeling. See Table 1 of Shinohara as well, wherein there are multiple recognition target words with corresponding instructions.); and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals ([0036] via “As an example, a case is assumed in which the first command is the instruction PX in Table 1. 
When the operator OP utters “plus X” and the first command is input to the robot teaching device 30 at a timing t1, the operating speed control section 332 operates once the robot 10 (arm tip) for a predetermined period of time at a speed V.sub.0 according to the first command, and then decelerates at a constant deceleration. … In a case where the number of uttering of “plus X” is one, the robot 10 operates in the speed control pattern of FIG. 5 and stops.”), ([0037] via “In a case where speed control is performed in a speed control pattern as illustrated in FIG. 5, by the operator OP uttering “plus X” in a short time interval as illustrated in FIG. 6A, the average movement speed of the robot 10 (V.sub.A1 in FIG. 6A) can be increased. FIG. 6B illustrates an operating speed of the robot 10 in a case where “plus X” is uttered with a longer time interval than that in FIG. 6A. As illustrated in FIG. 6B, by increasing the time interval at which the operator OP utters “plus X”, the average movement speed of the robot 10 (V.sub.A2 in FIG. 6B) becomes lower than V.sub.A1.”); and providing the modified planning signals to the robot ([0031] via “In a case where an operation permitting execution of the instruction is accepted (S15: Yes), the command execution signal output section 314 transmits a signal for executing the instruction to the robot controller 20 (step S16).”). 
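The utterance-frequency speed control that the rejection draws from Shinohara can be pictured with a short numeric sketch. This is an assumed toy model, not Shinohara's actual control law: each utterance restarts the robot at speed `v0`, which then decays at a constant deceleration until the next utterance (or until it reaches zero), so more frequent utterances yield a higher average speed, in the spirit of FIGS. 6A and 6B.

```python
def average_speed(utterance_times, v0=1.0, decel=0.5):
    """Toy model of utterance-driven speed control (illustrative assumption).
    `utterance_times` is a sorted, non-empty list of utterance timestamps.
    Speed restarts at v0 at each utterance and decays linearly at `decel`;
    returns total distance divided by total moving time."""
    total_dist = 0.0
    for t, t_next in zip(utterance_times, utterance_times[1:]):
        # distance covered before the next utterance (or before stopping)
        dt = min(t_next - t, v0 / decel)
        total_dist += v0 * dt - 0.5 * decel * dt * dt
    # after the final utterance, coast until the speed reaches zero
    total_dist += v0 * v0 / (2 * decel)
    total_time = (utterance_times[-1] - utterance_times[0]) + v0 / decel
    return total_dist / total_time
```

With these assumed parameters, utterances spaced one second apart produce a higher average speed than the same number of utterances spaced two seconds apart, mirroring Shinohara's point that the operator sets the pace by how often the command word is repeated.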
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein the method comprises: generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. Doing so operates the robot according to user input, such that user safety is taken into consideration by only modifying the program when understood target words are recognized, as stated by Shinohara ([0038] via “By performing the speed control described above, it is possible to avoid a situation in which the robot 10 continues to operate with a command by a single utterance, and achieve a movement that takes safety of the operator OP into consideration. Additionally, at the same time, the operator OP can operate the robot 10 at a desired speed by adjusting the frequency of the utterance.”). Regarding Claim 2, modified reference Hausman teaches the method of claim 1, but is silent on wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals. However, Shinohara teaches wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals ([0035] via “As illustrated in FIG. 
2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), (Note: The Examiner interprets the operating speed control section 332 as the supervisory agent.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals. Doing so operates the robot according to user input, such that user safety is taken into consideration by only modifying the program when understood target words are recognized, as stated by Shinohara ([0038] via “By performing the speed control described above, it is possible to avoid a situation in which the robot 10 continues to operate with a command by a single utterance, and achieve a movement that takes safety of the operator OP into consideration. Additionally, at the same time, the operator OP can operate the robot 10 at a desired speed by adjusting the frequency of the utterance.”). 
Regarding Claim 5, modified reference Hausman teaches the method of claim 1, wherein at least some portions of the LLM response are interpreted using at least one of, natural language processing, or image processing ([0051] via “Turning now to the Figures, FIG. 1A illustrates an example of a human 101 providing a free-form (FF) natural language (NL) instruction 105 of “bring me a snack from the table” to an example robot 110.”), ([0060] via “In FIG. 2A, the LLM engine 130 generates an LLM prompt 205A based on the FF NL input 105 (“bring me a snack from the table”). The LLM engine 130 can generate the LLM prompt 205A such that it conforms strictly to the FF NL input 105 or can generate the LLM prompt 205A such that it is based on, but does not strictly conform to, the FF NL input 105.”). Regarding Claim 6, modified reference Hausman teaches the method of claim 1, wherein the description of the environment is given in one or more of, a textualized natural language description, an audible natural language description, or an image ([0062] via “The scene descriptor(s) 202A can include NL descriptor(s) of object(s) currently or recently detected in the environment with the robot 110, such as descriptor(s) of object(s) determined based on processing image(s) or other vision data using object detection and classification machine learning model(s). For example, the scene descriptor(s) 202A can include “pear”, “keys”, “human”, “table”, “sink”, and “countertops” and the LLM engine 130 can generate the LLM prompt 205A to incorporate one or more of such descriptors.”). Regarding Claim 7, modified reference Hausman teaches the method of claim 1, further comprising forming a further large language model prompt to the artificial intelligence entity module, wherein the further large language model prompt is based at least in part on the LLM response ([0092] via “At block 362, the system determines whether a robotic skill was selected for implementation at block 360. 
If not, the system proceeds to block 364 and controlling the robot based on the FF NL instruction of block 352 is done. If so, the system proceeds to blocks 366 and 368.”), ([0093] via “At block 366, the system implements the selected robotic skill. At block 368, the system modifies the most recent LLM prompt, based on the skill description of the implemented skill. The system then proceeds back to block 354 and processes the LLM prompt as modified at block 368. The system also performs another iteration of blocks 356, 358, 360, and 362—and optionally blocks 366 and 368 (depending on the decision of block 362). This general process can continue until a termination condition is selected in an iteration of block 360.”), (Note: See Figure 3 of Hausman as well.). Regarding Claim 10, modified reference Hausman teaches the method of claim 1, further comprising requesting the LLM to produce a response that includes one or more safe operation limits or one or more controllable condition safety constraints ([0030] via “For example, TD-based methods can be used to learn a value function, such as a value function that is additionally conditioned on the natural language descriptor of a skill, and utilize the value function to determine whether a given skill is feasible from the given state. It is worth noting that in the undiscounted, sparse reward case, where the agent receives the reward of 1.0 at the end of the episode if it was successful and 0.0 otherwise, the value function trained via RL corresponds to an affordance function that specifies whether a skill is possible in a given state.”), ([0050] via “An example affordance function for a terminate skill can be to always set it to a small value, such as 0.1, represented by p.sub.terminate.sup.affordance=0.1. This can ensure the planning process terminates when there is no feasible skill to choose from.”), (Note: The Examiner interprets the feasibility/possibility of the skill to be performed as a safety response.). 
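For illustration only, the affordance-weighted skill selection that Hausman describes at [0030] and [0050] can be sketched as the product of a task-grounding probability (from the LLM output) and a value-function (affordance) score, with the terminate skill pinned to a small constant. The skill names and numeric values below are hypothetical:

```python
def select_skill(task_probs, affordances):
    """Score each candidate skill by task-grounding probability times
    world-grounding (affordance) score, and select the argmax, as in
    Hausman's value-function-guided selection loop."""
    scores = {s: task_probs[s] * affordances.get(s, 0.0) for s in task_probs}
    return max(scores, key=scores.get)

task_probs = {"go to the table": 0.6, "pick up a bottle": 0.3, "terminate": 0.1}
affordances = {"go to the table": 0.9, "pick up a bottle": 0.2,
               "terminate": 0.1}  # small constant per Hausman [0050]

assert select_skill(task_probs, affordances) == "go to the table"
```

In the iterative process of blocks 354-368, the selected skill's description would be appended to the LLM prompt and the selection repeated until the terminate skill wins, ensuring the planning process halts when no feasible skill remains.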
Regarding Claim 11, Hausman teaches a non-transitory computer readable medium having stored thereon a sequence of instructions which, when stored in memory and executed by one or more processors causes the one or more processors to perform ([0126] via “Other implementations can include a non-transitory computer readable storage medium storing instructions executable by one or more processor(s) (e.g., a central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) to perform a method such as one or more of the methods described herein.”) a set of acts for ensuring safe execution of robotic commands by a robot ([0056] via “Robot 110 also includes one or more processors that, for example: process, using an LLM, an LLM prompt that is based on the FF NL input 105 to generate LLM output; determine, based on the LLM output, descriptions of robotic skills, and value function(s) for robotic skills, robotic skill(s) to implement in performing the robotic task; control the robot 110, during performance of the robotic task, based on the determined robotic skill(s); etc.”), the set of acts comprising: receiving a description of an environment ([0061] via “In some implementations, the LLM engine 130 can optionally generate the LLM prompt 205A further based on one or more of scene descriptor(s) 202A of a current environment of the robot 110, prompt example(s) 203A, and/or an explanation 204A.”); forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment ([0060] via “In FIG. 2A, the LLM engine 130 generates an LLM prompt 205A based on the FF NL input 105 (“bring me a snack from the table”). The LLM engine 130 can generate the LLM prompt 205A such that it conforms strictly to the FF NL input 105 or can generate the LLM prompt 205A such that it is based on, but does not strictly conform to, the FF NL input 105. 
For example, as illustrated by LLM prompt 205A1, a non-limiting example of LLM prompt 205A, the LLM prompt can be “How would you bring me a snack from the table? I would 1.”.”), ([0061] via “In some implementations, the LLM engine 130 can optionally generate the LLM prompt 205A further based on one or more of scene descriptor(s) 202A of a current environment of the robot 110, prompt example(s) 203A, and/or an explanation 204A.”), and wherein the prompt is constituted into a computer readable storage area ([0059] via “Turning now to FIG. 2A, a process flow of how various example components can interact in selecting an initial robotic skill to implement responsive to the FF NL instruction 150 of FIG. 1A and in the environment of FIG. 1B. The example components, illustrated in FIG. 2A, include an LLM engine 130, an LLM 150, a task-grounding engine 132, a world-grounding engine 134, value function model(s), a selection engine 130, and an implementation engine 136. One or more of the illustrated components can be implemented by the robot 110 (e.g., utilizing processor(s) and/or memory thereof) and/or utilizing remote computing device(s) (e.g., cloud-based server(s)) that are in network communication with the robot 110.”), ([0064] via “The LLM engine 130 processes the generated LLM prompt 205A, using the LLM 150, to generate LLM output 206A.”), ([0109] via “These software modules are generally executed by processor 614 alone or in combination with other processors. 
Memory 625 used in the storage subsystem 624 can include a number of memories including a main random access memory (RAM) 630 for storage of instructions and data during program execution and a read only memory (ROM) 632 in which fixed instructions are stored.”); responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment ([0056] via “Robot 110 also includes one or more processors that, for example: process, using an LLM, an LLM prompt that is based on the FF NL input 105 to generate LLM output; determine, based on the LLM output, descriptions of robotic skills, and value function(s) for robotic skills, robotic skill(s) to implement in performing the robotic task; control the robot 110, during performance of the robotic task, based on the determined robotic skill(s); etc.”), ([0064] via “The LLM engine 130 processes the generated LLM prompt 205A, using the LLM 150, to generate LLM output 206A. As described herein, the LLM output 206A can model a probability distribution, over candidate word compositions, and is dependent on the LLM prompt 205A.”), ([0065] via “The task grounding engine 132 generates task-grounding measures 208A and generates the task-grounding measures 208A based on the LLM output 206A and skill descriptions 207. Each of the skill descriptions 207 is descriptive of a corresponding skill that the robot 110 is configured to perform. For example, “go to the table” can be descriptive of a “navigate to table” skill that the robot can perform by utilizing a trained navigation policy with a navigation target of “table” (or of a location corresponding to a “table”). As another example, “go to the sink” can be descriptive of a “navigate to sink” skill that the robot can perform by utilizing the trained navigation policy with a navigation target of “sink” (or of a location corresponding to a “sink”). 
As yet another example, “pick up a bottle” can be descriptive of a “grasp a bottle” skill that the robot can perform utilizing grasping heuristics fine-tuned to a bottle and/or using a trained grasping network.”). Hausman is silent on generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. However, Shinohara teaches generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response ([0030] via “The robot teaching device 30 identifies whether or not the voice-inputted phrase includes the recognition target word stored in the correspondence storage section 312 (step S13). When the voice-inputted phrase does not include the recognition target word (S13: No), the process returns to step S12. Here, “hand open” uttered by the operator OP is stored in the correspondence storage section 312 as the recognition target word. In this case, it is determined that the voice-inputted phrase includes the recognition target word (S13: Yes), and the process proceeds to step S14.”), (Note: The Examiner interprets the identification of whether the voice-inputted phrase includes a recognition target word or not (step S13) as the classification of the LLM response.); labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints ([0025] via “Table 1 indicated below shows an example of information stored in the correspondence storage section 312. 
In the example of Table 1, a recognition target word “hand open” is associated with an instruction HOP, a recognition target word “hand close” is associated with an instruction HCL, a recognition target word “plus X” is associated with an instruction PX, and a recognition target word “box open” is associated with an instruction BOP. Here, the respective instructions in Table 1 have the following meanings.”), ([0035] via “As illustrated in FIG. 2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), (Note: The Examiner interprets the speed of which the robot operates as a controllable condition safety constraint. The Examiner further interprets the specific instruction that each specific recognized target word corresponds to as the labeling. See Table 1 of Shinohara as well, wherein there are multiple recognition target words with corresponding instructions.); and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals ([0036] via “As an example, a case is assumed in which the first command is the instruction PX in Table 1. 
When the operator OP utters “plus X” and the first command is input to the robot teaching device 30 at a timing t1, the operating speed control section 332 operates once the robot 10 (arm tip) for a predetermined period of time at a speed V.sub.0 according to the first command, and then decelerates at a constant deceleration. … In a case where the number of uttering of “plus X” is one, the robot 10 operates in the speed control pattern of FIG. 5 and stops.”), ([0037] via “In a case where speed control is performed in a speed control pattern as illustrated in FIG. 5, by the operator OP uttering “plus X” in a short time interval as illustrated in FIG. 6A, the average movement speed of the robot 10 (V.sub.A1 in FIG. 6A) can be increased. FIG. 6B illustrates an operating speed of the robot 10 in a case where “plus X” is uttered with a longer time interval than that in FIG. 6A. As illustrated in FIG. 6B, by increasing the time interval at which the operator OP utters “plus X”, the average movement speed of the robot 10 (V.sub.A2 in FIG. 6B) becomes lower than V.sub.A1.”); and providing the modified planning signals to the robot ([0031] via “In a case where an operation permitting execution of the instruction is accepted (S15: Yes), the command execution signal output section 314 transmits a signal for executing the instruction to the robot controller 20 (step S16).”). 
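For illustration only, the Table 1 correspondence and the step S13 determination quoted above admit a minimal sketch. The phrase-matching logic is a simplification of Shinohara's recognition target word determination section 313, and the instruction codes are those of Table 1:

```python
# Recognition-target-word -> instruction correspondence from Shinohara's Table 1
# (correspondence storage section 312); dispatch logic is an illustrative sketch.
CORRESPONDENCE = {"hand open": "HOP", "hand close": "HCL",
                  "plus X": "PX", "box open": "BOP"}

def label_phrase(phrase):
    """Step S13: return the instruction label when the voice-inputted phrase
    includes a recognition target word; otherwise return None (the process
    returns to voice input at step S12)."""
    for word, instruction in CORRESPONDENCE.items():
        if word in phrase:
            return instruction
    return None

assert label_phrase("please hand open now") == "HOP"
assert label_phrase("unrelated chatter") is None
```

In the Examiner's mapping, the returned instruction code corresponds to the "label," and the speed at which the labeled command is executed corresponds to the controllable condition safety constraint.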
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein the set of acts comprises: generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. Doing so operates the robot according to user input, such that user safety is taken into consideration by only modifying the program when understood target words are recognized, as stated by Shinohara ([0038] via “By performing the speed control described above, it is possible to avoid a situation in which the robot 10 continues to operate with a command by a single utterance, and achieve a movement that takes safety of the operator OP into consideration. Additionally, at the same time, the operator OP can operate the robot 10 at a desired speed by adjusting the frequency of the utterance.”). Regarding Claim 12, modified reference Hausman teaches the non-transitory computer readable medium of claim 11, but is silent on wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals. However, Shinohara teaches wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals ([0035] via “As illustrated in FIG. 
2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), (Note: The Examiner interprets the operating speed control section 332 as the supervisory agent.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals. Doing so operates the robot according to user input, such that user safety is taken into consideration by only modifying the program when understood target words are recognized, as stated by Shinohara ([0038] via “By performing the speed control described above, it is possible to avoid a situation in which the robot 10 continues to operate with a command by a single utterance, and achieve a movement that takes safety of the operator OP into consideration. Additionally, at the same time, the operator OP can operate the robot 10 at a desired speed by adjusting the frequency of the utterance.”). 
Regarding Claim 15, modified reference Hausman teaches the non-transitory computer readable medium of claim 11, wherein at least some portions of the LLM response are interpreted using at least one of, natural language processing, or image processing ([0051] via “Turning now to the Figures, FIG. 1A illustrates an example of a human 101 providing a free-form (FF) natural language (NL) instruction 105 of “bring me a snack from the table” to an example robot 110.”), ([0060] via “In FIG. 2A, the LLM engine 130 generates an LLM prompt 205A based on the FF NL input 105 (“bring me a snack from the table”). The LLM engine 130 can generate the LLM prompt 205A such that it conforms strictly to the FF NL input 105 or can generate the LLM prompt 205A such that it is based on, but does not strictly conform to, the FF NL input 105.”). Regarding Claim 16, modified reference Hausman teaches the non-transitory computer readable medium of claim 11, wherein the description of the environment is given in one or more of, a textualized natural language description, an audible natural language description, or an image ([0062] via “The scene descriptor(s) 202A can include NL descriptor(s) of object(s) currently or recently detected in the environment with the robot 110, such as descriptor(s) of object(s) determined based on processing image(s) or other vision data using object detection and classification machine learning model(s). For example, the scene descriptor(s) 202A can include “pear”, “keys”, “human”, “table”, “sink”, and “countertops” and the LLM engine 130 can generate the LLM prompt 205A to incorporate one or more of such descriptors.”). 
Regarding Claim 17, modified reference Hausman teaches the non-transitory computer readable medium of claim 11, further comprising instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of ([0126] via “Other implementations can include a non-transitory computer readable storage medium storing instructions executable by one or more processor(s) (e.g., a central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) to perform a method such as one or more of the methods described herein.”) forming a further large language model prompt to the artificial intelligence entity module, wherein the further large language model prompt is based at least in part on the LLM response ([0092] via “At block 362, the system determines whether a robotic skill was selected for implementation at block 360. If not, the system proceeds to block 364 and controlling the robot based on the FF NL instruction of block 352 is done. If so, the system proceeds to blocks 366 and 368.”), ([0093] via “At block 366, the system implements the selected robotic skill. At block 368, the system modifies the most recent LLM prompt, based on the skill description of the implemented skill. The system then proceeds back to block 354 and processes the LLM prompt as modified at block 368. The system also performs another iteration of blocks 356, 358, 360, and 362—and optionally blocks 366 and 368 (depending on the decision of block 362). This general process can continue until a termination condition is selected in an iteration of block 360.”). 
Regarding Claim 19, Hausman teaches a system for ensuring safe execution of robotic commands by a robot ([0056] via “Robot 110 also includes one or more processors that, for example: process, using an LLM, an LLM prompt that is based on the FF NL input 105 to generate LLM output; determine, based on the LLM output, descriptions of robotic skills, and value function(s) for robotic skills, robotic skill(s) to implement in performing the robotic task; control the robot 110, during performance of the robotic task, based on the determined robotic skill(s); etc.”), the system comprising: a storage medium having stored thereon a sequence of instructions; and one or more processors that execute the sequence of instructions to cause the one or more processors to perform a set of acts ([0126] via “Other implementations can include a non-transitory computer readable storage medium storing instructions executable by one or more processor(s) (e.g., a central processing unit(s) (CPU(s)), graphics processing unit(s) (GPU(s)), and/or tensor processing unit(s) (TPU(s))) to perform a method such as one or more of the methods described herein.”), the set of acts comprising, receiving a description of an environment ([0061] via “In some implementations, the LLM engine 130 can optionally generate the LLM prompt 205A further based on one or more of scene descriptor(s) 202A of a current environment of the robot 110, prompt example(s) 203A, and/or an explanation 204A.”); forming a large language model (LLM) prompt to an artificial intelligence entity module, wherein the prompt is based at least in part on the description of the environment ([0060] via “In FIG. 2A, the LLM engine 130 generates an LLM prompt 205A based on the FF NL input 105 (“bring me a snack from the table”). 
The LLM engine 130 can generate the LLM prompt 205A such that it conforms strictly to the FF NL input 105 or can generate the LLM prompt 205A such that it is based on, but does not strictly conform to, the FF NL input 105. For example, as illustrated by LLM prompt 205A1, a non-limiting example of LLM prompt 205A, the LLM prompt can be “How would you bring me a snack from the table? I would 1.”.”), ([0061] via “In some implementations, the LLM engine 130 can optionally generate the LLM prompt 205A further based on one or more of scene descriptor(s) 202A of a current environment of the robot 110, prompt example(s) 203A, and/or an explanation 204A.”), and wherein the prompt is constituted into a computer readable storage area ([0059] via “Turning now to FIG. 2A, a process flow of how various example components can interact in selecting an initial robotic skill to implement responsive to the FF NL instruction 150 of FIG. 1A and in the environment of FIG. 1B. The example components, illustrated in FIG. 2A, include an LLM engine 130, an LLM 150, a task-grounding engine 132, a world-grounding engine 134, value function model(s), a selection engine 130, and an implementation engine 136. One or more of the illustrated components can be implemented by the robot 110 (e.g., utilizing processor(s) and/or memory thereof) and/or utilizing remote computing device(s) (e.g., cloud-based server(s)) that are in network communication with the robot 110.”), ([0064] via “The LLM engine 130 processes the generated LLM prompt 205A, using the LLM 150, to generate LLM output 206A.”), ([0109] via “These software modules are generally executed by processor 614 alone or in combination with other processors. 
Memory 625 used in the storage subsystem 624 can include a number of memories including a main random access memory (RAM) 630 for storage of instructions and data during program execution and a read only memory (ROM) 632 in which fixed instructions are stored.”); responsive to prompting the artificial intelligence entity module, receiving an LLM response that contains a dimension, or a limit, or a safety rating pertaining to aspects of the environment ([0056] via “Robot 110 also includes one or more processors that, for example: process, using an LLM, an LLM prompt that is based on the FF NL input 105 to generate LLM output; determine, based on the LLM output, descriptions of robotic skills, and value function(s) for robotic skills, robotic skill(s) to implement in performing the robotic task; control the robot 110, during performance of the robotic task, based on the determined robotic skill(s); etc.”), ([0064] via “The LLM engine 130 processes the generated LLM prompt 205A, using the LLM 150, to generate LLM output 206A. As described herein, the LLM output 206A can model a probability distribution, over candidate word compositions, and is dependent on the LLM prompt 205A.”), ([0065] via “The task grounding engine 132 generates task-grounding measures 208A and generates the task-grounding measures 208A based on the LLM output 206A and skill descriptions 207. Each of the skill descriptions 207 is descriptive of a corresponding skill that the robot 110 is configured to perform. For example, “go to the table” can be descriptive of a “navigate to table” skill that the robot can perform by utilizing a trained navigation policy with a navigation target of “table” (or of a location corresponding to a “table”). As another example, “go to the sink” can be descriptive of a “navigate to sink” skill that the robot can perform by utilizing the trained navigation policy with a navigation target of “sink” (or of a location corresponding to a “sink”). 
As yet another example, “pick up a bottle” can be descriptive of a “grasp a bottle” skill that the robot can perform utilizing grasping heuristics fine-tuned to a bottle and/or using a trained grasping network.”). Hausman is silent on generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. However, Shinohara teaches generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response ([0030] via “The robot teaching device 30 identifies whether or not the voice-inputted phrase includes the recognition target word stored in the correspondence storage section 312 (step S13). When the voice-inputted phrase does not include the recognition target word (S13: No), the process returns to step S12. Here, “hand open” uttered by the operator OP is stored in the correspondence storage section 312 as the recognition target word. In this case, it is determined that the voice-inputted phrase includes the recognition target word (S13: Yes), and the process proceeds to step S14.”), (Note: The Examiner interprets the identification of whether the voice-inputted phrase includes a recognition target word or not (step S13) as the classification of the LLM response.); labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints ([0025] via “Table 1 indicated below shows an example of information stored in the correspondence storage section 312. 
In the example of Table 1, a recognition target word “hand open” is associated with an instruction HOP, a recognition target word “hand close” is associated with an instruction HCL, a recognition target word “plus X” is associated with an instruction PX, and a recognition target word “box open” is associated with an instruction BOP. Here, the respective instructions in Table 1 have the following meanings.”), ([0035] via “As illustrated in FIG. 2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), (Note: The Examiner interprets the speed of which the robot operates as a controllable condition safety constraint. The Examiner further interprets the specific instruction that each specific recognized target word corresponds to as the labeling. See Table 1 of Shinohara as well, wherein there are multiple recognition target words with corresponding instructions.); and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals ([0036] via “As an example, a case is assumed in which the first command is the instruction PX in Table 1. 
When the operator OP utters “plus X” and the first command is input to the robot teaching device 30 at a timing t1, the operating speed control section 332 operates once the robot 10 (arm tip) for a predetermined period of time at a speed V.sub.0 according to the first command, and then decelerates at a constant deceleration. … In a case where the number of uttering of “plus X” is one, the robot 10 operates in the speed control pattern of FIG. 5 and stops.”), ([0037] via “In a case where speed control is performed in a speed control pattern as illustrated in FIG. 5, by the operator OP uttering “plus X” in a short time interval as illustrated in FIG. 6A, the average movement speed of the robot 10 (V.sub.A1 in FIG. 6A) can be increased. FIG. 6B illustrates an operating speed of the robot 10 in a case where “plus X” is uttered with a longer time interval than that in FIG. 6A. As illustrated in FIG. 6B, by increasing the time interval at which the operator OP utters “plus X”, the average movement speed of the robot 10 (V.sub.A2 in FIG. 6B) becomes lower than V.sub.A1.”); and providing the modified planning signals to the robot ([0031] via “In a case where an operation permitting execution of the instruction is accepted (S15: Yes), the command execution signal output section 314 transmits a signal for executing the instruction to the robot controller 20 (step S16).”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein the set of acts comprises: generating a plan for safe operation of the robot by: classifying at least some portions of the LLM response; labeling at least some sub-portions of the LLM response, wherein the sub-portions correspond to one or more controllable condition safety constraints; and based on the labeling, modifying at least some of the sub-portions to generate modified planning signals; and providing the modified planning signals to the robot. Doing so operates the robot according to user input, such that user safety is taken into consideration by only modifying the program when understood target words are recognized, as stated by Shinohara ([0038] via “By performing the speed control described above, it is possible to avoid a situation in which the robot 10 continues to operate with a command by a single utterance, and achieve a movement that takes safety of the operator OP into consideration. Additionally, at the same time, the operator OP can operate the robot 10 at a desired speed by adjusting the frequency of the utterance.”). Regarding Claim 20, modified reference Hausman teaches the system of claim 19, but is silent on wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals. However, Shinohara teaches wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals ([0035] via “As illustrated in FIG. 
2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), (Note: The Examiner interprets the operating speed control section 332 as the supervisory agent.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein classification of the at least some portions of the LLM response is carried out by a supervisory agent that receives robotic planning signals and produces the modified planning signals. Doing so operates the robot according to user input, such that user safety is taken into consideration by only modifying the program when understood target words are recognized, as stated by Shinohara ([0038] via “By performing the speed control described above, it is possible to avoid a situation in which the robot 10 continues to operate with a command by a single utterance, and achieve a movement that takes safety of the operator OP into consideration. Additionally, at the same time, the operator OP can operate the robot 10 at a desired speed by adjusting the frequency of the utterance.”). 
Regarding Claim 21, modified reference Hausman teaches the method of claim 1, but is silent on wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle. However, Shinohara teaches wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle ([0035] via “As illustrated in FIG. 2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), ([0037] via “In a case where speed control is performed in a speed control pattern as illustrated in FIG. 5, by the operator OP uttering “plus X” in a short time interval as illustrated in FIG. 6A, the average movement speed of the robot 10 (V.sub.A1 in FIG. 6A) can be increased. FIG. 6B illustrates an operating speed of the robot 10 in a case where “plus X” is uttered with a longer time interval than that in FIG. 6A. As illustrated in FIG. 6B, by increasing the time interval at which the operator OP utters “plus X”, the average movement speed of the robot 10 (V.sub.A2 in FIG. 6B) becomes lower than V.sub.A1.”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle. Doing so controls the average speed output by the robot based on the user’s desired input, as stated above by Shinohara in both citations. Regarding Claim 22, modified reference Hausman teaches the computer readable medium of claim 11, but is silent on wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle. However, Shinohara teaches wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle ([0035] via “As illustrated in FIG. 2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), ([0037] via “In a case where speed control is performed in a speed control pattern as illustrated in FIG. 5, by the operator OP uttering “plus X” in a short time interval as illustrated in FIG. 6A, the average movement speed of the robot 10 (V.sub.A1 in FIG. 6A) can be increased. FIG. 
6B illustrates an operating speed of the robot 10 in a case where “plus X” is uttered with a longer time interval than that in FIG. 6A. As illustrated in FIG. 6B, by increasing the time interval at which the operator OP utters “plus X”, the average movement speed of the robot 10 (V.sub.A2 in FIG. 6B) becomes lower than V.sub.A1.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle. Doing so controls the average speed output by the robot based on the user’s desired input, as stated above by Shinohara in both citations. Regarding Claim 23, modified reference Hausman teaches the system of claim 19, but is silent on wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle. However, Shinohara teaches wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle ([0035] via “As illustrated in FIG. 
2, the command execution signal output section 314 may include an operating speed control section 332 configured to, when the recognition target word (hereinafter, also referred to as a first recognition target word) associated with a command to operate the robot 10 by the voice input (hereinafter, also referred to as a first command) is continuously determined to be included in the phrase by the recognition target word determination section 313, generate a signal for executing the first command such that an average operating speed of the robot 10 operated by the first command changes in accordance with frequency in which the first recognition target word is continuously determined to be included in the phrase by the recognition target word determination section 313.”), ([0037] via “In a case where speed control is performed in a speed control pattern as illustrated in FIG. 5, by the operator OP uttering “plus X” in a short time interval as illustrated in FIG. 6A, the average movement speed of the robot 10 (V.sub.A1 in FIG. 6A) can be increased. FIG. 6B illustrates an operating speed of the robot 10 in a case where “plus X” is uttered with a longer time interval than that in FIG. 6A. As illustrated in FIG. 6B, by increasing the time interval at which the operator OP utters “plus X”, the average movement speed of the robot 10 (V.sub.A2 in FIG. 6B) becomes lower than V.sub.A1.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Shinohara wherein the one or more controllable condition safety constraints are one of, a speed limit, or a distance from any obstacle. Doing so controls the average speed output by the robot based on the user’s desired input, as stated above by Shinohara in both citations. 12. Claim(s) 3, 4, 13, and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hausman et al. 
(US 20230311335 A1 hereinafter Hausman) in view of Shinohara (US 20200342872 A1 hereinafter Shinohara), and further in view of Scott et al. (US 20220024486 A1 hereinafter Scott). Regarding Claim 3, modified reference Hausman teaches the method of claim 2, but is silent on the method further comprising labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a “do not approach” semantic, an “actively avoid” semantic, or a collision tolerant subrange. However, Scott teaches labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a “do not approach” semantic, an “actively avoid” semantic, or a collision tolerant subrange ([0086] via “By way of further example, output data of the camera module 94 of the sensing subsystem 86 is processed by the machine learning module 104, implemented as a neural network. More specifically, the machine learning module 104 is a neural network based classifier.”), ([0109] via “As should be appreciated by those skilled in the art any of the objects in either of the fields of view of FIGS. 15 and 16 represents a potential obstacle to the robot's movement. To ensure that any potential obstacle is handled appropriately, the path and motion planning module 120 includes an obstacle handler submodule configured for detecting potential obstacles in the path being taken by the robot, for classifying those detected obstacles as being either traversable, avoidable, or non-traversable, and for causing the robot to move along its chosen path without change if a detected obstacle is traversable.”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Scott wherein the method further comprises labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a “do not approach” semantic, an “actively avoid” semantic, or a collision tolerant subrange. Doing so ensures that any obstacle encountered by the robot is appropriately handled, as stated above by Scott in paragraph [0109]. Regarding Claim 4, modified reference Hausman teaches the method of claim 2, but is silent on wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant. However, Scott teaches wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant ([0086] via “By way of further example, output data of the camera module 94 of the sensing subsystem 86 is processed by the machine learning module 104, implemented as a neural network. More specifically, the machine learning module 104 is a neural network based classifier.”), ([0109] via “As should be appreciated by those skilled in the art any of the objects in either of the fields of view of FIGS. 15 and 16 represents a potential obstacle to the robot's movement. To ensure that any potential obstacle is handled appropriately, the path and motion planning module 120 includes an obstacle handler submodule configured for detecting potential obstacles in the path being taken by the robot, for classifying those detected obstacles as being either traversable, avoidable, or non-traversable, and for causing the robot to move along its chosen path without change if a detected obstacle is traversable.”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Scott wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant. Doing so ensures that the traversable obstacle encountered by the robot is appropriately handled, allowing the robot to continue along its initial path, as stated above by Scott in paragraph [0109]. Regarding Claim 13, modified reference Hausman teaches the non-transitory computer readable medium of claim 12, but is silent on the non-transitory computer readable medium further comprising instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a “do not approach” semantic, an “actively avoid” semantic, or a collision tolerant subrange. However, Scott teaches labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a “do not approach” semantic, an “actively avoid” semantic, or a collision tolerant subrange ([0086] via “By way of further example, output data of the camera module 94 of the sensing subsystem 86 is processed by the machine learning module 104, implemented as a neural network. More specifically, the machine learning module 104 is a neural network based classifier.”), ([0109] via “As should be appreciated by those skilled in the art any of the objects in either of the fields of view of FIGS. 15 and 16 represents a potential obstacle to the robot's movement. 
To ensure that any potential obstacle is handled appropriately, the path and motion planning module 120 includes an obstacle handler submodule configured for detecting potential obstacles in the path being taken by the robot, for classifying those detected obstacles as being either traversable, avoidable, or non-traversable, and for causing the robot to move along its chosen path without change if a detected obstacle is traversable.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Scott wherein the non-transitory computer readable medium further comprises instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of labeling portions of the robotic planning signals with a label that carries semantics of at least one of, a “do not approach” semantic, an “actively avoid” semantic, or a collision tolerant subrange. Doing so ensures that any obstacle encountered by the robot is appropriately handled, as stated above by Scott in paragraph [0109]. Regarding Claim 14, modified reference Hausman teaches the non-transitory computer readable medium of claim 12, but is silent on wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant. However, Scott teaches wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant ([0086] via “By way of further example, output data of the camera module 94 of the sensing subsystem 86 is processed by the machine learning module 104, implemented as a neural network. More specifically, the machine learning module 104 is a neural network based classifier.”), ([0109] via “As should be appreciated by those skilled in the art any of the objects in either of the fields of view of FIGS. 
15 and 16 represents a potential obstacle to the robot's movement. To ensure that any potential obstacle is handled appropriately, the path and motion planning module 120 includes an obstacle handler submodule configured for detecting potential obstacles in the path being taken by the robot, for classifying those detected obstacles as being either traversable, avoidable, or non-traversable, and for causing the robot to move along its chosen path without change if a detected obstacle is traversable.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Scott wherein at least some portions of the LLM response are interpreted as referring to a false object that is deemed to be collision tolerant. Doing so ensures that the traversable obstacle encountered by the robot is appropriately handled, allowing the robot to continue along its initial path, as stated above by Scott in paragraph [0109]. 13. Claim(s) 8, 9, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hausman et al. (US 20230311335 A1 hereinafter Hausman) in view of Shinohara (US 20200342872 A1 hereinafter Shinohara), and further in view of Ranzinger (US 20250029206 A1 hereinafter Ranzinger). Regarding Claim 8, modified reference Hausman teaches the method of claim 1, but is silent on the method further comprising forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image. 
However, Ranzinger teaches forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image ([0056] via “In at least one embodiment, said natural language query is input as text to said LLM. … In at least one embodiment, a user inputs a natural language query along with a high-resolution input, such that a processor performing a neural network implements a processing task on said high-resolution input according to said query. In at least on embodiment, a user inputs a natural language query to said LLM in order to parse and learn a user's requested task. For example, if a user were to input “How many people are in this picture?” as a query along with a high-resolution input image, then said LLM would analyze said query to understand that said user requests an image recognition task and would inform a processor to perform said recognition task on said high-resolution input image.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ranzinger wherein the method further comprises forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image. Doing so provides the further artificial intelligence entity module with additional information in order to allow the processor to better determine the requested task from the user, as stated above by Ranzinger. Regarding Claim 9, modified reference Hausman teaches the method of claim 8, but is silent on wherein the further artificial intelligence entity module operates in an image mode. 
However, Ranzinger teaches wherein the further artificial intelligence entity module operates in an image mode ([0056] via “In at least one embodiment, said natural language query is input as text to said LLM. … In at least one embodiment, a user inputs a natural language query along with a high-resolution input, such that a processor performing a neural network implements a processing task on said high-resolution input according to said query. In at least on embodiment, a user inputs a natural language query to said LLM in order to parse and learn a user's requested task. For example, if a user were to input “How many people are in this picture?” as a query along with a high-resolution input image, then said LLM would analyze said query to understand that said user requests an image recognition task and would inform a processor to perform said recognition task on said high-resolution input image.”), (Note: The Examiner interprets the image input to the LLM and the analysis of said image input as the artificial intelligence entity operating in an image mode, as the image mode is described in paragraph [0073] of the specification of the instant application.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ranzinger wherein the further artificial intelligence entity module operates in an image mode. Doing so provides the further artificial intelligence entity module with additional information in order to allow the processor to better determine the requested task from the user, as stated above by Ranzinger. 
Regarding Claim 18, modified reference Hausman teaches the non-transitory computer readable medium of claim 11, but is silent on the non-transitory computer readable medium further comprising instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image. However, Ranzinger teaches forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image ([0056] via “In at least one embodiment, said natural language query is input as text to said LLM. … In at least one embodiment, a user inputs a natural language query along with a high-resolution input, such that a processor performing a neural network implements a processing task on said high-resolution input according to said query. In at least on embodiment, a user inputs a natural language query to said LLM in order to parse and learn a user's requested task. For example, if a user were to input “How many people are in this picture?” as a query along with a high-resolution input image, then said LLM would analyze said query to understand that said user requests an image recognition task and would inform a processor to perform said recognition task on said high-resolution input image.”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ranzinger wherein the non-transitory computer readable medium further comprises instructions which, when stored in memory and executed by the one or more processors causes the one or more processors to perform acts of forming a further large language model prompt to a further artificial intelligence entity module, and wherein the further large language model prompt to the further artificial intelligence entity module comprises at least one image. Doing so provides the further artificial intelligence entity with additional information in order to allow the processor to better determine the requested task from the user, as stated above by Ranzinger. Examiner’s Note 14. The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP § 2123. Conclusion 15. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). 
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 16. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYRON X KASPER whose telephone number is (571)272-3895. The examiner can normally be reached Monday - Friday 8 am - 5 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached on (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BYRON XAVIER KASPER/Examiner, Art Unit 3657 /ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657

Prosecution Timeline

Nov 10, 2023
Application Filed
Aug 05, 2025
Non-Final Rejection — §101, §103
Dec 09, 2025
Response Filed
Jan 30, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594964
METHOD OF AND SYSTEM FOR GENERATING REFERENCE PATH OF SELF DRIVING CAR (SDC)
2y 5m to grant Granted Apr 07, 2026
Patent 12594137
HARD STOP PROTECTION SYSTEM AND METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12583101
METHOD FOR OPERATING A MODULAR ROBOT, MODULAR ROBOT, COLLISION AVOIDANCE SYSTEM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 24, 2026
Patent 12576529
ROBOT SIMULATION DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12564962
ROBOT REMOTE OPERATION CONTROL DEVICE, ROBOT REMOTE OPERATION CONTROL SYSTEM, ROBOT REMOTE OPERATION CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
88%
With Interview (+18.4%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 103 resolved cases by this examiner. Grant probability derived from career allow rate.
