Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The amendments to the independent claims have necessitated new grounds of rejection, and the following prior rejections are withdrawn:
Claims 1, 4-8, 10, 11, 14, 15, and 18-20, rejected under 35 U.S.C. 103 as being unpatentable over Hu et al (“Collaborative Motion Prediction via Neural Motion Message Passing”, publisher: Computer Vision Foundation, published: June 2020, pages 6319-6328) in view of Mahjourian et al (US Application: US 2022/0135086, published: May 5, 2022, filed: Oct. 29, 2021).
Claim(s) 2, 3, 12, 13, 16, and 17 rejected under 35 U.S.C. 103 as being unpatentable over Hu et al (“Collaborative Motion Prediction via Neural Motion Message Passing”, publisher: Computer Vision Foundation, published: June 2020, pages 6319-6328) in view of Mahjourian et al (US Application: US 2022/0135086, published: May 5, 2022, filed: Oct. 29, 2021) in view of Engstrom et al (US Patent: 11447142, issued: Sep. 20, 2022, filed: May 16, 2019).
Claim 9, rejected under 35 U.S.C. 103 as being unpatentable over Hu et al (“Collaborative Motion Prediction via Neural Motion Message Passing”, publisher: Computer Vision Foundation, published: June 2020, pages 6319-6328) in view of Mahjourian et al (US Application: US 2022/0135086, published: May 5, 2022, filed: Oct. 29, 2021) in view of Naghshvar et al (US Application: US 20200150672, published: May 14, 2020, filed: Nov. 13, 2019).
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/23/2025 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/09/2026 is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 – 101 Analysis:
Claim 1 is directed to a system which performs extraction of spatio-temporal features to infer states of one or more agents and predicts future behaviors for the one or more agents to calculate one or more interactivity scores.
101 Analysis Step 2A, Prong One
Claim 1 recites the following limitations, of which the bolded limitations constitute a ‘mental process’ that covers performance of the limitations in the human mind using observation, evaluation, judgment, and opinion, and the underlined limitations are interpreted as intended use:
A system for navigation based on internal state inference and interactivity estimation, comprising: a memory storing one or more instructions; and a processor executing one or more of the instructions stored on the memory to perform training a policy for autonomous navigation by: extracting spatio-temporal features from one or more historical observations of two or more agents within a simulation environment including an ego-agent and one or more non-ego-agents; analyzing the spatio-temporal features to infer one or more internal states of one or more of the agents; as a first set of predicted trajectory distributions predicting one or more future behaviors for one or more of the one or more of the agents in a first scenario including an existence of the ego-agent within the simulation environment based on the spatio-temporal features from the ego-agent; and as a second set of predicted trajectory distributions in a second scenario excluding the existence of the ego-agent within the simulation environment based on spatio-temporal features from one or more of the non-ego-agents and no spatio-temporal features from the ego-agent; calculating one or more interactivity scores for one or more of the agents based on a difference between the first set of predicted trajectory distributions from the first scenario and the second set of predicted trajectory distributions from the second scenario, wherein the one or more interactivity scores are used as weights of prediction errors for the first and second set of predicted trajectory distributions; controlling the ego-agent based on one or more of the interactivity scores between the ego-agent and one or more of the non-ego-agents.
More specifically, a person can observe spatio-temporal features and evaluate those features to make a judgment about future behaviors, and can then make further evaluations of the scenarios and judge interactivity scores.
101 Analysis Step 2A, Prong Two
With regards to the additional elements of:
“a memory storing one or more instructions and a processor executing one or more of the instructions stored on the memory to perform training a policy for autonomous navigation …”: these additional elements amount to a recitation of a computer to perform the limitations of the method and amount to no more than mere instructions to apply the exception using generic computer component(s), and therefore fail to provide an improvement to the technology or technical field. The courts have identified using the words ‘apply it’ (or an equivalent) with the judicial exception to be insufficient to integrate a judicial exception into a practical application.
“controlling the ego-agent based on one or more of the interactivity scores between the ego-agent and one or more of the non-ego-agents”: these additional elements also amount to no more than mere instructions to apply the exception, where the ego-agent could be a generic computer component that is used to execute ‘control’ instructions (such as control-navigation instructions). It is noted that the ‘controlling’ in the claim does not limit or describe what aspect of the ego-agent is being controlled, and thus it can be interpreted that the ego-agent is directed to execute a type of instruction (such as a ‘control’ type instruction or a control-navigation type instruction). The courts have identified using the words ‘apply it’ (or an equivalent) with the judicial exception to be insufficient to integrate a judicial exception into a practical application.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception.
101 Analysis Step 2B
As explained above with respect to Step 2A, Prong Two, there are the additional elements of:
“a memory storing one or more instructions and a processor executing one or more of the instructions stored on the memory to perform training a policy for autonomous navigation …”, and “controlling the ego-agent based on one or more of the interactivity scores between the ego-agent and one or more of the non-ego-agents”. These additional elements were explained in Step 2A, Prong Two to be merely ‘apply it’ (or an equivalent) with the judicial exception using a generic computer/generic computer components. The courts have found these types of limitations to be insufficient to qualify as ‘significantly more’ when recited in a claim with a judicial exception (see Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984).
Thus, the additional elements are not considered significantly more than the recited exception and also do not provide an inventive concept.
101 Analysis of claims 2-10
Dependent claims 2-10 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of dependent claims are directed toward additional aspects of the judicial exception that do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception. Therefore, dependent claims 2-10 are not patent eligible under the same rationale as claim 1.
101 Analysis of claim 11
Claim 11 is rejected under similar rationale as claim 1, as it is an independent claim that is broader than claim 1 and lacks even the generic computer components.
101 Analysis of claims 12-14
Dependent claims 12-14 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of dependent claims are directed toward additional aspects of the judicial exception that do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception. Therefore, dependent claims 12-14 are not patent eligible under the same rationale as claim 11.
101 Analysis of claim 15
Claim 15 is rejected under similar rationale as claim 1. It is noted that claim 15 additionally recites ‘a controller controlling the navigation of the autonomous vehicle based on internal state inference and interactivity estimation autonomous vehicle according to the policy for autonomous navigation and inputs from a vehicle sensor’.
101 Analysis Step 2A, Prong One: Here, the bolded items above constitute a ‘mental process’ that covers performance of the limitations in the human mind using observation, evaluation, judgment and opinion (a person can make a judgment about a navigation decision based upon evaluation of state inference and interactivity estimation, according to policy data and input sensor data). It is noted that the recitation of ‘controlling the navigation’, when broadly interpreted, can be interpreted as a mental process of making a judgment of a control action/decision. Should the applicant have intended the control to manipulate specific navigation entities/navigation components of the autonomous vehicle that alter/change navigational movement of the autonomous vehicle, the examiner suggests the applicant consider making this type of clarification (in the interest of distinguishing from a mental process).
101 Analysis Step 2A, Prong Two:
With regards to the underlined, the additional elements of:
‘controller controlling the navigation of the autonomous vehicle’ amounts to a recitation of a computer/controller to perform the limitations of the method and amounts to no more than mere instructions to apply the exception using generic computer component(s), and therefore fails to provide an improvement to the technology or technical field. It is noted that the ‘controlling’ in the claim does not limit or describe what aspect of the autonomous vehicle is being controlled, and thus it can be interpreted that the controller is directed to execute a type of instruction (such as a ‘control’ type instruction or a control-navigation type instruction). The courts have identified using the words ‘apply it’ (or an equivalent) with the judicial exception to be insufficient to integrate a judicial exception into a practical application.
‘input from a vehicle sensor’ amounts to mere data gathering (e.g., obtaining information about transactions using the Internet to verify credit card transactions, CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011)), which is a form of insignificant extra-solution activity. The courts have identified adding insignificant extra-solution activity to the judicial exception to be insufficient to integrate a judicial exception into a practical application.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception.
101 Analysis Step 2B
As explained above with respect to Step 2A, Prong Two, there are the additional elements of:
“… controller controlling the navigation of the autonomous vehicle”: this additional element was explained in Step 2A, Prong Two to be merely ‘apply it’ (or an equivalent) with the judicial exception using generic computer/generic computer component(s). The courts have found these types of limitations to be insufficient to qualify as ‘significantly more’ when recited in a claim with a judicial exception (see Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984).
“… inputs from a vehicle sensor …”: these additional elements were explained in Step 2A, Prong Two to be insignificant extra-solution activity, and the courts have found these types of limitations to be insufficient to qualify as ‘significantly more’ when recited in a claim with a judicial exception (e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011)).
Thus, the additional elements are not considered significantly more than the recited exception and also do not provide an inventive concept.
101 Analysis of claims 16-20
Dependent claims 16-20 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of dependent claims are directed toward additional aspects of the judicial exception that do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception. Therefore, dependent claims 16-20 are not patent eligible under the same rationale as claim 15.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-8, 10, 11, 14, 15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al (“Collaborative Motion Prediction via Neural Motion Message Passing”, publisher: Computer Vision Foundation, published: June 2020, pages 6319-6328) in view of Mahjourian et al (US Application: US 2022/0135086, published: May 5, 2022, filed: Oct. 29, 2021) and further in view of Kim et al (“Driving Style-Based Conditional Variational Autoencoder for Prediction of Ego Vehicle Trajectory”, published: Dec. 24, 2021, publisher: IEEE Access, pages 169348-169356).
With regards to claim 1. Hu et al teaches a system for navigation based on internal state inference and interactivity estimation, comprising:
to perform training a policy for autonomous navigation by:
extracting spatio-temporal features from one or more historical observations of two or more agents within a simulation environment including an ego-agent and one or more non-ego agents (page 6321, left column and 6322: observation values are feature values (associated with spatial positioning and trajectory) for a plurality of agents/actors in an environment. Some actors include an ego agent (such as vehicle) and non-ego agent(s) (such as pedestrians));
analyzing the spatio-temporal features to infer one or more internal states of one or more of the agents (page 6322, left column: internal hidden states for one or more actors (agents) are inferred (q(t)));
predicting one or more future behaviors for one or more of the agents: … in a first scenario including an existence of the ego-agent within the simulation environment … (page 6322, left column, page 6323, left column: an interactive scenario for an ego agent such as a vehicle along with another agent (a pedestrian) is processed to determine future trajector(ies)/behaviors) and … in a second scenario excluding the existence of the ego-agent within the simulation environment (page 6322, left column: an individual scenario without existence of another actor (agent) is processed to help predict future trajector(ies)/behaviors);
However Hu et al does not expressly teach … a memory storing one or more instructions; and a processor executing one or more of the instructions stored on the memory …; as a first set of predicted trajectory distributions … based on the spatio-temporal features from the ego-agent; and as a second set of predicted trajectory distributions … based on spatio-temporal features from one or more of the non-ego agents and no spatio-temporal features from the ego-agent; and calculating one or more interactivity scores for one or more of the agents based on a difference between the first set of predicted trajectory distributions from the first scenario and the second set of predicted trajectory distributions from the second scenario, wherein the one or more interactivity scores are used as weights of prediction errors for the first and second set of predicted trajectory distributions; and controlling the ego-agent based on one or more of the interactivity scores between the ego-agent and one or more of the non-ego-agents.
Yet Mahjourian et al teaches … a memory storing one or more instructions; and a processor executing one or more of the instructions stored on the memory … (paragraphs 0108 and 0110: a processor and memory are implemented);
as a first set of predicted trajectory distributions … based on the spatio-temporal features from the ego-agent (paragraphs 0010, 0011, 0034: a first set of distributions (conditional distributions) of trajectory-based data (which can include data based upon past states and velocities) from the ego/query agent and non-ego/non-query agent(s) is generated); and as a second set of predicted trajectory distributions … based on spatio-temporal features from one or more of the non-ego agents and no spatio-temporal features from the ego-agent (paragraphs 0011, 0012, 0034: a second set of distributions (marginal) of trajectory-based data (that excludes/is not conditioned on the ego/query agent) is generated); and calculating one or more interactivity scores for one or more of the agents based on a difference between the first set of predicted trajectory distributions from the first scenario and the second set of predicted trajectory distributions from the second scenario (paragraphs 0058-0060: interactivity score(s) are calculated based on a difference (amount of divergence) between the first and second distributions); and controlling the ego-agent based on one or more of the interactivity scores between the ego-agent and one or more of the non-ego-agents (paragraphs 0015, 0016, 0030, 0031, 0057: ego/query agent navigation behavior is controlled based upon models/predictions that are based on scores that help guide the ego/query agent).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hu et al’s ability to analyze spatio-temporal features to infer state data of one or more agents and to predict future trajectory behavior(s) for the one or more agents in a first and second scenario, as taught by Mahjourian et al. The combination would have allowed Hu et al to have objectively assessed trajectory data between an ego/query agent and target agents in order to help anticipate driver interactions and improve the quality of future trajectories for the ego agent (Mahjourian et al, paragraphs 0016 and 0053).
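For purposes of illustration only, a divergence-based interactivity score of the kind described above can be sketched as follows. This sketch is not reproduced from Mahjourian et al; it assumes the two predicted trajectory distributions (with and without the ego-agent) are diagonal Gaussians over a short horizon of (x, y) positions, and the example numbers are hypothetical.

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between two diagonal-Gaussian trajectory distributions."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def interactivity_score(cond_mu, cond_var, marg_mu, marg_var):
    """Score an agent by how much the ego-agent's presence shifts its
    predicted trajectory distribution (larger divergence = more interactive)."""
    return kl_gaussian(cond_mu, cond_var, marg_mu, marg_var)

# Hypothetical predicted (x, y) means over a 5-step horizon,
# with (conditional) and without (marginal) the ego-agent present.
cond_mu = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.5], [3.0, 0.9], [4.0, 1.4]])
marg_mu = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
var = np.full_like(cond_mu, 0.25)

score = interactivity_score(cond_mu, var, marg_mu, var)
```

An agent whose predicted trajectory is identical with or without the ego-agent scores zero, i.e. its behavior is unaffected by the ego-agent's existence.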
However the combination of Hu et al and Mahjourian et al does not expressly teach “… wherein the one or more interactivity scores are used as weights of prediction errors for the first and second set of predicted trajectory distributions”.
Yet Kim et al teaches wherein the one or more interactivity scores are used as weights of prediction errors for the first and second set of predicted trajectory distributions (page 169350, equation (3): a KL-divergence term within a loss function is used as a weighting value/error term over two trajectory distributions to balance the squared difference-magnitude term in the loss function).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hu et al and Mahjourian et al’s ability to determine an interactivity score with KL divergence, such that the score is used as a balancing (weighting) term in combination with the reconstruction term (squared difference magnitude) within a loss function, as taught by Kim et al. The combination would have allowed construction of new and plausible trajectory datapoints when predicting trajectories.
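As an illustrative sketch only, a loss of the general conditional-variational-autoencoder form discussed above (a reconstruction term balanced against a KL term) might look like the following. This is not Kim et al's equation (3) itself; the `beta` balancing weight and the unit-Gaussian latent prior are generic assumptions of this sketch.

```python
import numpy as np

def cvae_trajectory_loss(pred, target, mu, logvar, beta=1.0):
    """CVAE-style loss: a reconstruction term (squared error between
    predicted and ground-truth trajectories) plus a KL-divergence term
    regularizing the latent distribution N(mu, exp(logvar)) toward a
    unit Gaussian, balanced by the weight beta."""
    recon = np.sum((pred - target) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return recon + beta * kl
```

When the prediction matches the target and the latent distribution matches the prior, the loss is zero; increasing `beta` shifts the balance toward the divergence term, which is the weighting role described in the rejection above.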
With regards to claim 4. The system for navigation based on internal state inference and interactivity estimation of claim 1, the combination of Hu et al, Mahjourian et al and Kim et al teaches wherein one or more of the historical observations of one or more of the agents is a position or a velocity (as similarly explained in the rejection of claim 1, Mahjourian et al teaches past data can include velocity data ( paragraph 0034)), and is rejected under similar rationale.
With regards to claim 5. The system for navigation based on internal state inference and interactivity estimation of claim 1, Hu et al teaches wherein the extracting the spatio-temporal features from one or more of the historical observations of one or more of the agents is performed by a graph-based encoder (Figure 2, page 6321, section 3: An NMMP module/encoder references an associated set of LSTMs to take observed positional data, and also trajectory of one or more agents to produce encoded actor embedding (NMMP module is an interaction graph having message passing between LSTM embedded trajectory data layers)), and is rejected under similar rationale.
With regards to claim 6. The system for navigation based on internal state inference and interactivity estimation of claim 5, Hu et al teaches wherein the graph-based encoder includes a first long-short term memory (LSTM) layer, a graph message passing layer, and a second LSTM layer (as similarly explained in the rejection of claim 5, NMMP is implemented to use multiple LSTMs and message passing), and is rejected under similar rationale.
With regards to claim 7. The system for navigation based on internal state inference and interactivity estimation of claim 6, Hu et al teaches wherein the graph message passing layer is positioned between the first LSTM layer and the second LSTM layer (as similarly explained in the rejection of claim 5, the NMMP module is an interaction graph having message passing between LSTM embedded trajectory data layers), and is rejected under similar rationale.
With regards to claim 8. The system for navigation based on internal state inference and interactivity estimation of claim 6, Hu et al teaches wherein an output of the first LSTM layer and an output of the second LSTM layer is concatenated to generate final embeddings (Figure 2, page 6321, eq 2a and eq 2b: output of the LSTM layer data is concatenated/accumulated to produce final interacted Actor Embedding data), and is rejected under similar rationale.
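The encoder structure at issue in claims 5-8 (a first LSTM layer, a graph message-passing layer between agent embeddings, a second LSTM layer, and concatenation of the two layers' outputs) can be sketched generically as follows. This is not Hu et al's NMMP implementation; the recurrent update is a simplified tanh stand-in for an LSTM cell, and all weights and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def recurrent_encode(traj, W):
    """Simplified stand-in for an LSTM layer: fold each observed (x, y)
    step of a trajectory into a fixed-size hidden embedding."""
    h = np.zeros(W.shape[0])
    for step in traj:
        h = np.tanh(W[:, :2] @ step + W[:, 2:] @ h)
    return h

def message_pass(H, adj):
    """One round of graph message passing: each agent receives the
    average of its neighbors' embeddings."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    return (adj @ H) / deg

n_agents, hidden = 3, 4
trajs = rng.normal(size=(n_agents, 5, 2))   # 5 observed (x, y) steps per agent

# First LSTM-like layer: one embedding per agent from its observed trajectory.
W1 = rng.normal(size=(hidden, 2 + hidden))
H1 = np.stack([recurrent_encode(t, W1) for t in trajs])

# Graph message-passing layer positioned between the two recurrent layers.
adj = np.ones((n_agents, n_agents)) - np.eye(n_agents)   # fully connected
M = message_pass(H1, adj)

# Second LSTM-like layer refines the received messages; the outputs of the
# first and second layers are concatenated into the final embeddings.
W2 = rng.normal(size=(hidden, hidden))
H2 = np.tanh(M @ W2)
final = np.concatenate([H1, H2], axis=1)    # shape: (n_agents, 2 * hidden)
```

The concatenation at the end mirrors the claimed combination of the first and second layer outputs into final per-agent embeddings.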
With regards to claim 10. The system for navigation based on internal state inference and interactivity estimation of claim 1, the combination of Hu et al, Mahjourian et al and Kim et al teaches wherein Kullback-Leibler (KL) divergence is used to measure the difference between the first scenario and the second scenario (as similarly explained in the rejection of claim 1, Mahjourian et al was explained to use Kullback-Leibler divergence to measure the difference between two scenarios), and is rejected under similar rationale.
With regards to claim 11. The combination of Hu et al, Mahjourian et al and Kim et al teaches a computer-implemented method for navigation based on internal state inference and interactivity estimation, comprising training a policy for autonomous navigation by: extracting spatio-temporal features from one or more historical observations of two or more agents within a simulation environment including an ego-agent and one or more non-ego-agents; analyzing the spatio-temporal features to infer one or more internal states of one or more of the agents; predicting one or more future behaviors for one or more of the agents: as a first set of predicted trajectory distributions in a first scenario including an existence of the ego-agent within the simulation environment based on the spatio-temporal features from the ego-agent; and as a second set of predicted trajectory distributions in a second scenario excluding the existence of the ego-agent within the simulation environment based on spatio-temporal features from one or more of the non-ego-agents and no spatio-temporal features from the ego-agent; calculating one or more interactivity scores for one or more of the agents based on a difference between the first set of predicted trajectory distributions from the first scenario and the second set of predicted trajectory distributions from the second scenario, wherein the one or more interactivity scores are used as weights of prediction errors for the first and second set of predicted trajectory distributions; and controlling the ego-agent based on one or more of the interactivity scores between the ego-agent and one or more non-ego-agents, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 14. The computer-implemented method for navigation based on internal state inference and interactivity estimation of claim 11, the combination of Hu et al, Mahjourian et al and Kim et al teaches wherein one or more of the historical observations of one or more of the agents is a position or a velocity, as similarly explained in the rejection of claim 4, and is rejected under similar rationale.
With regards to claim 15. the combination of Hu et al, Mahjourian et al and Kim et al teaches a navigation based on internal state inference and interactivity estimation autonomous vehicle, comprising: a memory storing one or more instructions; a storage drive storing a policy for autonomous navigation; a processor executing one or more of the instructions stored on the memory to perform autonomous navigation by utilizing the policy for autonomous navigation, wherein the policy for autonomous navigation is trained by: extracting spatio-temporal features from one or more historical observations of two or more agents within a simulation environment including an ego-agent and one or more non-ego agents; analyzing the spatio-temporal features to infer one or more internal states of one or more of the agents; predicting one or more future behaviors for the one or more of the agents: as a first set of predicted trajectory distributions in a first scenario including an existence of the ego-agent within the simulation environment based on the spatio-temporal features from the ego-agent; and as a second set of predicted trajectory distributions in a second scenario excluding the existence of the ego-agent within the simulation environment based on spatio-temporal features from one or more of the non-ego-agents and no spatio-temporal features from the ego-agent; calculating one or more interactivity scores for one or more of the agents based on a difference between the first set of predicted trajectory distributions from the first scenario and the second set of predicted trajectory distributions from the second scenario, wherein the one or more interactivity scores are used as weights of prediction errors for the first and second set of predicted trajectory distributions; and a controller controlling the navigation of the autonomous vehicle based on internal state inference and interactivity estimation autonomous vehicle according to the policy for autonomous navigation and 
inputs from a vehicle sensor, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
With regards to claim 18. The navigation based on internal state inference and interactivity estimation autonomous vehicle of claim 15, the combination of Hu et al, Mahjourian et al and Kim et al teaches wherein one or more of the historical observations of one or more of the agents is a position or a velocity, as similarly explained in the rejection of claim 4, and is rejected under similar rationale.
With regards to claim 19. The navigation based on internal state inference and interactivity estimation autonomous vehicle of claim 15, the combination of Hu et al and Mahjourian et al teaches wherein the extracting the spatio-temporal features from one or more of the historical observations of one or more of the agents is performed by a graph-based encoder, as similarly explained in the rejection of claim 5, and is rejected under similar rationale.
With regards to claim 20. The navigation based on internal state inference and interactivity estimation autonomous vehicle of claim 19, the combination of Hu et al, Mahjourian et al and Kim et al teaches wherein the graph-based encoder includes a first long-short term memory (LSTM) layer, a graph message passing layer, and a second LSTM layer, as similarly explained in the rejection of claim 6, and is rejected under similar rationale.
Claims 2, 3, 12, 13, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al (“Collaborative Motion Prediction via Neural Motion Message Passing”, publisher: Computer Vision Foundation, published: June 2020, pages 6319-6328) in view of Mahjourian et al (US Application: US 2022/0135086, published: May 5, 2022, filed: Oct. 29, 2021) in view of Kim et al (“Driving Style-Based Conditional Variational Autoencoder for Prediction of Ego Vehicle Trajectory”, published: Dec. 24, 2021, publisher: IEEE Access, pages 169348-169356) and further in view of Engstrom et al (US Patent: 11447142, issued: Sep. 20, 2022, filed: May 16, 2019).
With regards to claim 2. The system for navigation based on internal state inference and interactivity estimation of claim 1, the combination of Hu et al, Mahjourian et al and Kim et al teaches the calculating of one or more interactivity scores for one or more of the agents, as similarly explained in the rejection of claim 1, and is rejected under similar rationale.
However Hu et al, Mahjourian et al and Kim et al do not expressly teach “… the calculating … is based on counter factual prediction”.
Yet Engstrom et al teaches … the calculating … is based on counter factual prediction (Abstract, Figure 1, Figure 3, column 1, lines 1-67, column 3, lines 43-53, column 4, lines 38-49, column 12, lines 56-67, column 13, lines 1-10 and column 16, lines 1-32: interaction/behavior between actors is scored (using a computer/processing system) with respect to a counter factual metric (a surprise metric), given sensed/perceived data, and the scores are used to determine a decision level of a navigation yield-action for the ego-vehicle).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hu et al, Mahjourian et al and Kim et al’s ability to analyze spatio-temporal features to calculate one or more interactivity score(s) and assess how to control the ego vehicle based on the score(s), such that the different sets of trajectories could have been assessed using counter factual methods to glean/arrive at a desired vehicular/ego-vehicle navigation action, as taught by Engstrom et al. The combination would have allowed for evaluating how surprising a particular action would be to other road users (Engstrom et al, column 3, lines 31-40).
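Engstrom et al's actual surprise metric is not reproduced here. As a hedged, generic illustration of the concept referenced above (scoring how "surprising" another agent's observed action is relative to a prediction), one common formulation is the negative log-likelihood of the observed action under a predicted distribution; the Gaussian form and the example decelerations below are assumptions of this sketch.

```python
import math

def surprise(observed, mu, var):
    """Negative log-likelihood of an observed value under a 1-D Gaussian
    prediction: observations far from the predicted mean score as more
    'surprising'."""
    return 0.5 * (math.log(2 * math.pi * var) + (observed - mu) ** 2 / var)

# A hard brake is more surprising than a mild one when only a mild
# deceleration was predicted for the other agent (hypothetical values).
hard_brake = surprise(-3.0, mu=-0.5, var=0.5)
mild_brake = surprise(-0.6, mu=-0.5, var=0.5)
```

A planner could then compare such scores across candidate ego actions and prefer actions whose counterfactual consequences are least surprising to other road users.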
With regards to claim 3. The system for navigation based on internal state inference and interactivity estimation of claim 1, the combination of Hu et al, Mahjourian et al, Kim et al and Engstrom et al teaches wherein one or more of the internal states is an aggressiveness level or a yielding level (as similarly explained in the rejection of claim 2, Engstrom teaches a level of yielding), and the claim is rejected under similar rationale.
With regards to claim 12. The computer-implemented method for navigation based on internal state inference and interactivity estimation of claim 11, the combination of Hu et al, Mahjourian et al, Kim et al and Engstrom et al teaches wherein the calculating of one or more interactivity scores for one or more of the agents is based on counterfactual prediction, as similarly explained in the rejection of claim 2, and the claim is rejected under similar rationale.
With regards to claim 13. The computer-implemented method for navigation based on internal state inference and interactivity estimation of claim 11, the combination of Hu et al, Mahjourian et al, Kim et al and Engstrom et al teaches wherein one or more of the internal states is an aggressiveness level or a yielding level, as similarly explained in the rejection of claim 3, and the claim is rejected under similar rationale.
With regards to claim 16. The navigation based on internal state inference and interactivity estimation autonomous vehicle of claim 15, the combination of Hu et al, Mahjourian et al, Kim et al and Engstrom et al teaches wherein the calculating of one or more interactivity scores for one or more of the agents is based on counterfactual prediction, as similarly explained in the rejection of claim 2, and the claim is rejected under similar rationale.
With regards to claim 17. The navigation based on internal state inference and interactivity estimation autonomous vehicle of claim 15, the combination of Hu et al, Mahjourian et al, Kim et al and Engstrom et al teaches wherein one or more of the internal states is an aggressiveness level or a yielding level, as similarly explained in the rejection of claim 3, and the claim is rejected under similar rationale.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al (“Collaborative Motion Prediction via Neural Motion Message Passing”, publisher: Computer Vision Foundation, published: June 2020, pages 6319-6328) in view of Mahjourian et al (US Application: US 2022/0135086, published: May 5, 2022, filed: Oct. 29, 2021) in view of Kim et al (“Driving Style-Based Conditional Variational Autoencoder for Prediction of Ego Vehicle Trajectory”, published: Dec. 24, 2021, publisher: IEEE Access, pages 169348-169356) in view of Naghshvar et al (US Application: US 20200150672, published: May 14, 2020, filed: Nov. 13, 2019).
With regards to claim 9. The system for navigation based on internal state inference and interactivity estimation of claim 1, the combination of Hu et al, Mahjourian et al and Kim et al teaches the training of the policy for autonomous navigation, as similarly explained in the rejection of claim 1, and the claim is rejected under similar rationale.
However, the combination does not expressly teach … the training … is based on a Partially Observable Markov Decision Process (POMDP).
Yet Naghshvar et al teaches … the training … is based on a Partially Observable Markov Decision Process (POMDP) (paragraph 0026: training includes modeling driving and other sequential decision-making processes as a POMDP).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hu et al, Mahjourian et al and Kim et al’s ability to implement training for autonomous navigation, such that the training would have been based on a POMDP, as taught by Naghshvar et al. The combination would have improved reinforcement learning systems by considering the uncertainty of state-action value functions before determining how to proceed with selecting an action corresponding to a state-action value function (Naghshvar et al, paragraph 0005).
Response to Arguments
Applicant's arguments filed 12/23/2025 have been fully considered but they are not persuasive.
The applicant argues, with respect to claim 1 as amended, that the prior art does not address the limitations in which the interactivity scores may be used as the weights of prediction errors in the loss function. These amendments have changed the scope of the invention, and the examiner respectfully directs applicant’s attention to the rejection of claim 1 above for an explanation as to how the new reference (Kim et al) is combined with Hu et al and Mahjourian et al to teach the limitations of newly amended claim 1.
The applicant argues claims 11 and 15 are allowable for the reasons presented by the applicant for claim 1. However, this argument is not persuasive since the rejection of claim 1 has been explained above.
The applicant argues claims 2-10, 12-14 and 16-20 are allowable for their dependency upon claims 1, 11 or 15. However, this argument is not persuasive since the rejections of claims 1, 11 and 15 have been explained above.
With regards to 35 USC 101, Step 2A, Prong One, the applicant argues that the claims are not directed to a mental process because they require “controlling the ego agent based on one or more of the interactivity scores…”. However, the examiner notes that the ‘control’ aspect was addressed in the prior rejections under Prong Two, and was not relied upon for Prong One as alleged by the applicant. The examiner directs applicant’s attention to the Prong Two analysis of the rejections above for an explanation as to how ‘control’ is addressed.
With regards to Step 2A, Prong Two, the applicant argues the improvement is due to ‘calculating’ and ‘using the interactivity scores as weights’. However, this argument is not persuasive since these aspects have been addressed under Prong One as mental steps. The applicant further argues that these scores help improve technology because the ‘interactivity scores as weights’ are used. However, the use of interactivity scores, as explained above, is a mental step, and the ‘usage’ of the scores is not persuasive as improving technology. Rather, as explained under Prong Two above, using the scores at such a high level of generality (such as with generic computer components) amounts to no more than mere instructions to apply the exception using generic computer component(s) and therefore fails to provide an improvement to the technology or technical field. The courts have identified that using the words ‘apply it’ (or an equivalent) with the judicial exception is insufficient to integrate a judicial exception into a practical application. The examiner further notes the ‘control’ step is also addressed in Step Two above and is repeated here for convenience as being insufficient to be considered an integration into a practical application:
“controlling the ego-agent based on one or more of the interactivity scores between the ego-agent and one or more of the non-ego-agents”: these additional elements also amount to no more than mere instructions to apply the exception, where the ‘ego’ agent could be a generic computer component that is used to execute ‘control’ instructions (such as control-navigation instructions). It is noted that the ‘controlling’ in the claim does not limit or describe what aspect of the ego-agent is being controlled, and thus it can be interpreted that the ego-agent is directed to execute a type of instruction (such as a ‘control’ type instruction or a control-navigation type instruction). The courts have identified that using the words ‘apply it’ (or an equivalent) with the judicial exception is insufficient to integrate a judicial exception into a practical application.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI whose telephone number is (571) 272-7596. The examiner can normally be reached Monday - Friday, 9 am - 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILSON W TSUI/Primary Examiner, Art Unit 2172