Prosecution Insights
Last updated: April 19, 2026
Application No. 18/961,764

SCENE MODELING USING TRAJECTORY PREDICTIONS AND TOKENIZED FEATURES

Non-Final OA: §101, §102, §103
Filed: Nov 27, 2024
Examiner: MATTA, ALEXANDER GEORGE
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 72% (above average; 98 granted / 137 resolved; +19.5% vs TC avg)
Interview Lift: +22.6% (strong; allow rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 3y 0m typical timeline (42 currently pending)
Total Applications: 179 career history (across all art units)
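The headline figures above are simple ratios over the counts shown. As a quick sanity check (treating the +19.5% delta as allow rate minus Tech Center average is an assumption about how the dashboard computes it):

```python
# Sanity check of the headline examiner statistics shown above.
# Counts (98 granted / 137 resolved, 179 total) come from the report itself;
# interpreting the +19.5% delta as (allow rate - TC average) is an assumption.

granted, resolved, total = 98, 137, 179

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")      # ~71.5%, shown rounded as 72%

implied_tc_avg = allow_rate - 0.195                # back-solved TC 3600 baseline
print(f"Implied TC average: {implied_tc_avg:.1%}")

pending = total - resolved
print(f"Currently pending: {pending}")             # 42, matching the report
```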

Statute-Specific Performance

§101: 8.5% (-31.5% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§112: 21.7% (-18.3% vs TC avg)
Deltas are versus a Tech Center average estimate • Based on career data from 137 resolved cases

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending for examination. This Action is made NON-FINAL.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-10 and 19-20 are directed to an apparatus and claims 11-18 are directed to a method. Therefore, claims 1-20 are within at least one of the four statutory categories.

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claims 1, 11, and 19 include limitations that recite an abstract idea (emphasized below), and claim 1 will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites: One or more processors comprising: one or more circuits to: obtain scene data associated with movement of one or more agents relative to a machine navigating through an environment; encode the scene data to determine one or more latent representations of the movement of the one or more agents relative to the machine navigating through the environment; determine a joint scene mode distribution based at least on the one or more latent representations; and decode the joint scene mode distribution into one or more trajectory predictions and one or more categorical predictions for at least one agent of the one or more agents.
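For orientation only, the four recited operations follow a familiar encode / joint-mode / decode shape. The sketch below is a generic illustration of that shape, not code from the application; the module choices, array shapes, and the softmax/argmax steps are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the four recited steps. Nothing here is taken
# from the application itself; shapes and internals are assumptions.

def encode(scene_data):
    """Obtain/encode: map per-agent scene observations to latent representations."""
    # scene_data: (num_agents, timesteps, features) -> latents: (num_agents, 16)
    return scene_data.mean(axis=1) @ rng.standard_normal((scene_data.shape[2], 16))

def joint_scene_mode_distribution(latents, num_modes=4):
    """Determine a distribution over discrete joint scene modes from the latents."""
    logits = latents.sum(axis=0) @ rng.standard_normal((16, num_modes))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                      # probabilities over joint modes

def decode(mode_dist, latents, horizon=10):
    """Decode the mode distribution into trajectories and categorical predictions."""
    mode = int(mode_dist.argmax())              # most likely joint scene mode
    trajectories = rng.standard_normal((latents.shape[0], horizon, 2))  # (x, y) per step
    categories = np.full(latents.shape[0], mode)    # one categorical label per agent
    return trajectories, categories

scene = rng.standard_normal((3, 20, 8))         # 3 agents, 20 timesteps, 8 features
latents = encode(scene)
dist = joint_scene_mode_distribution(latents)
trajs, cats = decode(dist, latents)
print(trajs.shape, cats.shape)                  # (3, 10, 2) (3,)
```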
The examiner submits that the foregoing bolded limitation(s) constitute a mental process because, under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, a human can observe and recall the scene around the vehicle while driving. They can make note of how the vehicles are moving around their vehicle. They can make a mental note of the different possible trajectories the vehicles around them may take in the future. They can make a decision about what they believe the vehicles around them are most likely to do. Accordingly, the claim recites at least one abstract idea.

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception.
The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”): One or more processors comprising: one or more circuits to: obtain scene data associated with movement of one or more agents relative to a machine navigating through an environment; encode the scene data to determine one or more latent representations of the movement of the one or more agents relative to the machine navigating through the environment; determine a joint scene mode distribution based at least on the one or more latent representations; and decode the joint scene mode distribution into one or more trajectory predictions and one or more categorical predictions for at least one agent of the one or more agents. For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of “One or more processors comprising: one or more circuits to:” the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer to perform the process. The “one or more processors comprising: one or more circuits to:” merely describes how to generally “apply” the otherwise mental judgements in a generic or general purpose vehicle control environment. The vehicle control system is recited at a high level of generality and merely automates the evaluating steps. 
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation(s) do/does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Regarding Step 2B of the 2019 PEG, representative independent claim # does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception for the same reasons to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. 
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using “one or more processors comprising: one or more circuits to:” amounts to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept.

Dependent claims 2-10, 12-18, and 20 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. Claims 2-4, 6-10, 12-14, 16-18, and 20 merely elaborate on the mental process discussed in the analysis of claim 1 above. Claims 5 and 15 elaborate on the mental process discussed in the analysis of claim 1 above while also adding the additional limitation of “execute a graph neural network (GNN)”. “Execute a graph neural network (GNN)” amounts to nothing more than applying the exception using a generic computing environment. Generally applying an exception using a generic computing environment cannot provide an inventive concept. Therefore, dependent claims 2-10, 12-18, and 20 are not patent eligible under the same rationale as provided for in the rejection of claim 1. Therefore, claims 1-20 are ineligible under 35 USC § 101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1-2, 9-12, 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cui et al. (US 20220153309 A1, hereinafter known as Cui). Regarding claim 1, Cui teaches One or more processors comprising: one or more circuits to: {Para [0006] “In an aspect, the present disclosure provides a computer-implemented method for motion forecasting and planning. The method may include determining (e.g., by a computing system including one or more processors, etc.) a plurality of actors within an environment of an autonomous vehicle from sensor data descriptive of the environment. The method may include determining (e.g., by the computing system, etc.) a plurality of future motion scenarios based on the sensor data by modeling a joint distribution of predicted actor trajectories for the plurality of actors. The method may include determining (e.g., by the computing system, etc.) an estimated probability for the plurality of future motion scenarios. The method may include generating (e.g., by the computing system, etc.) 
a contingency plan for motion of the autonomous vehicle, wherein the contingency plan includes at least one initial short-term trajectory and a plurality of subsequent long-term trajectories associated with the plurality of future motion scenarios, and wherein the contingency plan is generated based on the plurality of future motion scenarios and the estimated probability for the plurality of future motion scenarios.” } obtain scene data associated with movement of one or more agents relative to a machine navigating through an environment; {Para [0055] “In some implementations, the sensor(s) 235 can include at least two different types of sensor(s). For instance, the sensor(s) 235 can include at least one first sensor (e.g., the first sensor(s) 115, etc.) and at least one second sensor (e.g., the second sensor(s) 120, etc.). The at least one first sensor can be a different type of sensor than the at least one second sensor. For example, the at least one first sensor can include one or more image capturing device(s) (e.g., one or more cameras, RGB cameras, etc.). In addition, or alternatively, the at least one second sensor can include one or more depth capturing device(s) (e.g., LIDAR sensor, etc.). The at least two different types of sensor(s) can obtain sensor data indicative of one or more static or dynamic objects within an environment of the vehicle 205.” } encode the scene data to determine one or more latent representations of the movement of the one or more agents relative to the machine navigating through the environment; { Para [0107] “Referring still to 602, in some instances, determining a plurality of actors within the environment can include processing features from the sensor data (e.g., LIDAR data 302 of FIG. 3) and corresponding map data (e.g., HD map data 304 of FIG. 3) with a first machine-learned model (e.g., backbone CNN 306 of FIG.
3) to generate one or more object detections corresponding to the plurality of actors (e.g., object detections 308 of FIG. 3). In some instances, determining a plurality of actors within the environment can also include processing the one or more object detections (e.g., object detections 308 of FIG. 3) with a second machine-learned model (e.g., actor CNN 312 of FIG. 3) to generate a respective feature vector defining a local context for one or more of the plurality of actors.” Para [0108] “At 604, the method 600 can include determining a plurality of future motion scenarios based on the sensor data by modeling a joint distribution of predicted actor trajectories for the plurality of actors. In some instances, determining a plurality of future motion scenarios can include evaluating a diversity objective that rewards sampling of the plurality of future motion scenarios that require distinct reactions from the autonomous vehicle. For example, the scenario scorer model 318 of FIG. 3 can evaluate a diversity objective that rewards sampling of the plurality of future motion scenarios that require distinct reactions from the autonomous vehicle. In some instances, determining a plurality of future motion scenarios can include mapping a shared noise across a joint set of latent variables that are distinct from one another to determine the plurality of future motion scenarios. For example, the diverse sampler 314 of FIG. 3 can map the shared noise. In some instances, a GNN can be employed for the mapping of the shared noise across the joint set of latent variables. For example, the diverse sampler 314 of FIG. 3 can employ a GNN.” } determine a joint scene mode distribution based at least on the one or more latent representations; and {Para [0108] “At 604, the method 600 can include determining a plurality of future motion scenarios based on the sensor data by modeling a joint distribution of predicted actor trajectories for the plurality of actors. 
In some instances, determining a plurality of future motion scenarios can include evaluating a diversity objective that rewards sampling of the plurality of future motion scenarios that require distinct reactions from the autonomous vehicle. For example, the scenario scorer model 318 of FIG. 3 can evaluate a diversity objective that rewards sampling of the plurality of future motion scenarios that require distinct reactions from the autonomous vehicle. In some instances, determining a plurality of future motion scenarios can include mapping a shared noise across a joint set of latent variables that are distinct from one another to determine the plurality of future motion scenarios. For example, the diverse sampler 314 of FIG. 3 can map the shared noise. In some instances, a GNN can be employed for the mapping of the shared noise across the joint set of latent variables. For example, the diverse sampler 314 of FIG. 3 can employ a GNN.” Para [0118] “At 804, the method 800 can include determining a plurality of future traffic scenarios based on the sensor data, wherein the plurality of future traffic scenarios are determined by modeling a joint distribution of actor trajectories for the plurality of actors. In some instances, determining a plurality of future traffic scenarios can include evaluating a diversity objective that rewards sampling of the plurality of future traffic scenarios that require distinct reactions from the autonomous vehicle. For example, the scenario scorer model 318 of FIG. 3 can evaluating a diversity objective that rewards sampling of the plurality of future traffic scenarios that require distinct reactions from the autonomous vehicle. In some instances, determining a plurality of future traffic scenarios can include mapping a shared noise across a joint set of latent variables that are distinct from one another to determine the plurality of future traffic scenarios. For example, the diverse sampler 314 of FIG. 3 can map the shared noise. 
In some instances, a GNN can be employed for the mapping of the shared noise across the joint set of latent variables. For example, the diverse sampler 314 of FIG. 3 can employ a GNN.” } decode the joint scene mode distribution into one or more trajectory predictions and one or more categorical predictions for at least one agent of the one or more agents. {Para [0109] “At 606, the method 600 can include determining an estimated probability for the plurality of future motion scenarios. In some instances, determining an estimated probability for the plurality of future motion scenarios can employ a GNN to output a score corresponding to the estimated probability for the plurality of future motion scenarios. For example, the scenario scorer model 318 of FIG. 3 can employ a GNN.” Para [0119] “At 806, the method 800 can include determining an estimated probability of the plurality of future traffic scenarios. In some instances, determining an estimated probability for the plurality of future traffic scenarios can employ a GNN such that the GNN is augmented to output a score corresponding to the estimated probability for the plurality of future traffic scenarios. For example, the scenario scorer model 318 of FIG. 3 can employ a GNN.” } Regarding claim 2, Cui teaches The one or more processors of claim 1, wherein to obtain the scene data, the one or more circuits are to: obtain the scene data based at least on execution of a perception system, wherein the perception system is configured to generate the scene data based at least on sensor data generated by one or more sensors of the machine representing positions of the one or more agents relative to the machine. {Para [0106] “At 602, the method 600 can include determining a plurality of actors within an environment of an autonomous vehicle from sensor data descriptive of the environment. 
In particular, the sensor data can be obtained of a surrounding environment by employing an autonomous vehicle (e.g., a three-hundred-and-sixty-degree view). The autonomous vehicle can include a computing system. For example, a computing system (e.g., autonomous vehicle 105, vehicle computing system 210, operations computing system(s) 290A, remote computing system(s) 290B, etc.) can obtain sensor data. As another example, the environment can include a real-world environment or a simulated environment. In some instances, the sensor data obtained at 602 can include LIDAR input 302 and HD Map input 304 as depicted in FIG. 3.” Para [0119] “Referring still to 802, in some instances, determining a plurality of actors within the environment can include processing features from the sensor data (e.g., LIDAR data 302 of FIG. 3) and corresponding map data (e.g., HD map data 304 of FIG. 3) with a first machine-learned model (e.g., backbone CNN 306 of FIG. 3) to generate one or more object detections corresponding to the plurality of actors (e.g., object detections 308 of FIG. 3). In some instances, determining a plurality of actors within the environment can also include processing the one or more object detections (e.g., object detections 308 of FIG. 3) with a second machine-learned model (e.g., actor CNN 312 of FIG. 3) to generate a respective feature vector defining a local context for one or more of the plurality of actors.” } Regarding claim 9, Cui teaches The one or more processors of claim 1, wherein to obtain the scene data, the one or more circuits are to: obtain the scene data based at least on execution of a perception system, wherein the perception system is configured to generate the scene data based at least on sensor data generated by one or more sensors of the machine representing positions of the one or more agents relative to the machine. 
{Para [0110] “At 608, the method 600 can include generating a contingency plan for motion of the autonomous vehicle, wherein the contingency plan includes at least one initial short-term trajectory and a plurality of subsequent long-term trajectories associated with the plurality of future motion scenarios, and wherein the contingency plan is generated based on the plurality of future motion scenarios and the estimated probability for the plurality of future motion scenarios. For instance, generating a contingency plan can leverage optimizing a planner cost function including a linear combination of subcosts that encode different aspects of driving, the different aspects of driving including two or more of comfort, motion rules, or route. For example, a planner cost function can be leveraged by the contingency planner 320 of FIG. 3.” Para [0120] “At 808, the method 800 can include generating a contingency plan for motion of the autonomous vehicle, wherein the contingency plan includes at least one initial short-term trajectory and a plurality of subsequent long-term trajectories associated with the plurality of future traffic scenarios, and wherein the contingency plan is generated based on the plurality of future traffic scenarios and the estimated probability for the plurality of future motion scenarios. For instance, generating a contingency plan can leverage optimizing a planner cost function including a linear combination of subcosts that encode different aspects of driving, the different aspects of driving including two or more of comfort, motion rules, or route. For example, a planner cost function can be leveraged by the contingency planner 320 of FIG. 3.” Where executing the command to generate a contingency plan using the agent trajectory and traffic scenario can be considered as generating a prompt based at least on the one or more trajectory predictions and the one or more categorical predictions. The formatting of the prompt has not been defined. 
Any instruction that initiates action can be considered a prompt. } Regarding claim 10, Cui teaches wherein the one or more processors are comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system implemented using a robot; an aerial system; a medical system; a boating system; a smart area monitoring system; a system for performing deep learning operations; a system for performing simulation operations; a system for generating or presenting virtual reality (VR) content, augmented reality (AR) content, or mixed reality (MR) content; a system for performing digital twin operations; a system implemented using an edge device; a system incorporating one or more virtual machines (VMs); a system for generating synthetic data; a system implemented at least partially in a data center; a system for performing conversational artificial intelligence (AI) operations; a system for performing generative AI operations; a system implementing language models; a system for performing generative AI operations; a system for implementing vision language models (VLMs); a system for implementing large language models (LLMs); a system for implementing multi-modal language models; a system implemented using one or more cloud-hosted microservices; a system for hosting one or more real-time streaming applications; a system for performing light transport simulation; a system for performing collaborative content creation for 3D assets; or a system implemented at least partially using cloud computing resources. {Para [0042] “FIG. 2 depicts an example system overview of the autonomous vehicle as an autonomous vehicle according to example implementations of the present disclosure. More particularly, FIG. 2 illustrates a vehicle 205 including various systems and devices configured to control the operation of the vehicle 205. 
For example, the vehicle 205 can include an onboard vehicle computing system 210 (e.g., located on or within the autonomous vehicle, etc.) that is configured to operate the vehicle 205. For example, the vehicle computing system 210 can represent or be an autonomous vehicle control system configured to perform the operations and functions described herein. Generally, the vehicle computing system 210 can obtain sensor data 255 from sensor(s) 235 (e.g., sensor(s) 115, 120 of FIG. 1, etc.) onboard the vehicle 205, attempt to comprehend the vehicle's surrounding environment by performing various processing techniques on the sensor data 255, and generate an appropriate motion plan through the vehicle's surrounding environment (e.g., environment 110 of FIG. 1, etc.).” }

Regarding claim 11, it recites a method having limitations similar to those of claim 1 and therefore is rejected on the same basis. Regarding claim 12, it recites a method having limitations similar to those of claim 2 and therefore is rejected on the same basis. Regarding claim 19, it recites a system having limitations similar to those of claim 1 and therefore is rejected on the same basis. Regarding claim 20, it recites a method having limitations similar to those of claim 10 and therefore is rejected on the same basis.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Cui et al. (US 20220153309 A1, hereinafter known as Cui) in view of Zhang et al. (US 20220153315 A1, hereinafter known as Zhang). Regarding Claim 3, Cui teaches The one or more processors of claim 1, wherein, to encode the scene data, the one or more circuits are to: determine one or more latent representations comprising first pairwise relationships between pairs of agents of the one or more agents { Para [0107] “Referring still to 602, in some instances, determining a plurality of actors within the environment can include processing features from the sensor data (e.g., LIDAR data 302 of FIG. 3) and corresponding map data (e.g., HD map data 304 of FIG. 3) with a first machine-learned model (e.g., backbone CNN 306 of FIG. 3) to generate one or more object detections corresponding to the plurality of actors (e.g., object detections 308 of FIG. 3). In some instances, determining a plurality of actors within the environment can also include processing the one or more object detections (e.g., object detections 308 of FIG. 3) with a second machine-learned model (e.g., actor CNN 312 of FIG. 
3) to generate a respective feature vector defining a local context for one or more of the plurality of actors.” } Cui does not teach, second pairwise relationships between at least one agent of the one or more agents and a lane segment of a plurality of lane segments of the environment. However, Zhang teaches wherein, to encode the scene data, the one or more circuits are to: determine one or more latent representations comprising first pairwise relationships between pairs of agents of the one or more agents and second pairwise relationships between at least one agent of the one or more agents and a lane segment of a plurality of lane segments of the environment. {Para [0076-0082] “To improve the performance of an autonomous platform, such as the autonomous vehicle of FIG. 2, the technology of present disclosure can leverage actor data and map data to generate both actor-specific graphs and a global graph to account for actor-specific contexts, map topology, and actor-to-actor interactions. For example, FIG. 3 depicts an example of such a graph and a corresponding scene. FIG. 3 illustrates a first actor 302 (e.g., a first vehicle) and a second actor 308 (e.g., a second vehicle), traversing a travel way. The travel way can be within an environment of an autonomous vehicle (not shown in FIG. 3). The travel way can include, for example, a roadway. The environment can include a plurality of lanes (e.g., vehicle travel lanes). The first actor 302 may desire to turn from a first road 310, onto a different road 304, thus departing a first lane on the road 310. The second actor 308 may desire to continue straight, moving forward in its lane on road 310. 
Using the technology of the present disclosure, a computing system (e.g., an autonomous vehicle control system, another system associated with an autonomous vehicle) can better forecast the motion of each actor based on the actors' past motion, current position within the lane topology of the environment, and a potential interaction between the two actors. To help do so, the computing system can represent an actor and its context by constructing actor-specific graphs 306 and 312 using a machine-learned model framework (e.g., including neural networks). An actor-specific graph can include nodes, edges, and/or node embeddings. For example, the actor-specific graph 306 can include nodes 316 that represent lane segments of the lanes within the environment that are relevant to an actor 302. For example, each lane can be composed of a plurality of consecutive lane segments. The lane segments can be short segments along the centerline of the lane. A lane segment can have relationships with another segment in the same lane or in another lane (e.g., a pairwise relationship). The lanes that are relevant to an actor can include the lanes within a region of interest to the actor. This can include, for example, lane(s) in which the actor has previous traveled, is currently travelling, and/or is predicted to travel (e.g., based on past motion, current location, heading, etc.) and/or adjacent lanes thereto. The relevant lanes can therefore include lane segments that are also relevant to the actor. The actor-specific graph 306 can include edges 318 that represent the relationships between the lane segments. For example, the edges 318 can indicate that a particular lane segment is left of another lane segment, right of another lane segment, a predecessor of another lane segment, and/or a successor of another lane segment. The actor-specific graph 306 can include node embeddings (e.g., as shown in FIG. 
5 and further described herein) that encode the past motion of the actor and map features. For example, the node embeddings of actor-specific graph 306 can include a plurality of node embeddings that are indicative of at least one lane feature of at least one lane segment and a past motion of the first actor 302. A lane feature can include at least at least one of: (i) a geometric feature or (ii) a semantic feature of a respective lane segment. Geometric features can be descriptive of the geometry/layout of the respective lane segment. For example, geometric feature(s) can indicative at least one of: (1) a center location of the at least one lane segment, (2) an orientation of the at least one lane segment, or (3) a curvature of the at least one lane segment. Semantic features can include binary features of the lane segment. These can help describe the nature and intended purpose of the associated lane. For example, semantic feature(s) can indicate at least one of (1) a type of the at least one lane segment (e.g., turning lane, merging lane, exit ramp) or (2) an association of the at least one lane segment with a traffic sign, a traffic light and/or another type of traffic element. Using this structure, each actor-specific graph 306, 312 can focus on the lane topology that is relevant to the specific actor 302, 308 associated with the actor-specific graph, given the respective actor's past motion, current position, and/or heading. Actor-specific graphs 306, 312 naturally preserve the map structure of the environment and capture more fine-grained information, as each node embedding can represent the local context within a smaller region relevant to the respective actor 302, 308 rather than trying to capture the entire scene. A computing system can utilize the actor-specific graphs 306, 312 to help determine an interaction between actors. 
For example, the computing system can determine an interaction between the first actor 302 and the second actor 308 at least in part by propagating features between the first actor-specific graph 306 and the second actor-specific graph 312. This can include generating a global graph 314 based on the plurality of actor-specific graphs 306, 312. The global graph 314 can be associated with the plurality of actors (e.g., first actor 302 and second actor 308) and the plurality of lanes of the environment (e.g., the lanes relevant to each actor-specific graph). The global graph 314 can allow the computing system (e.g., of an autonomous vehicle) to determine which actors may interact with one another by propagating information over the global graph 314 (e.g., through message passing). To account for the potential interactions on a per actor level, the computing system can distribute the interactions determined using the global graph 314 to the individual actor-specific graphs 306, 312. This can allow the actor-specific graphs to reflect the interactions between actors in the environment. For example, by distributing the interactions determined through the global graph 314, the first actor-specific graph 306 can reflect the potential interactions of the first actor 302 with respect to the second actor 308. Likewise, the second actor-specific graph 312 can reflect the potential interactions of the second actor 308 with respect to the first actor 302. The computing system can then predict a motion trajectory for an actor based on the associated actor-specific graphs 306 and 312 (which capture the actor-to-actor interactions and actor-to-map relations). For example, the computing system can determine a predicted motion trajectory of the first actor 302 based on the interaction (between the actors 302, 308) and the first actor-specific graph 306 such that the first actor 302 avoids the second actor 308. 
Additionally, or alternatively, the computing system can determine a predicted motion trajectory of the second actor 308 based on the interaction (between the actors 302, 308) and the second actor-specific graph 312 such that the second actor 308 avoids the second actor 302. To generate the actor-specific graphs and the global graphs, as well as predict actor motion trajectories, a computing system can leverage a machine-learned model framework. FIGS. 4A and 4B are diagrams of such a computing system 400 and a machine-learned model framework 450, according to some implementations of the present disclosure. FIG. 4A depicts an example system 400 configured to perform actor motion forecasting within the surrounding environment of an autonomous platform. The computing system 400 can be, for example, an autonomous vehicle control system for an autonomous vehicle. The computing system 400 can be included in and/or include any of the system(s) (e.g., autonomous platform 105, vehicle 205, vehicle computing system 210, remote computing system 290B, operations computing system 290A, etc.) described herein such as, for example, with reference to FIGS. 1, 2, etc.” } It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cui to incorporate the teachings of Zhang because it improves the forecasting of motion of actors as discussed in para [0004] of Zhang “The present disclosure is directed to improved systems and methods for forecasting the motion of actors within a surrounding environment of an autonomous platform. For instance, an autonomous vehicle can operate within an environment such as a highway scenario that includes a plurality of lanes. A plurality of actors (e.g., other vehicles) can move within the lanes. 
The technology of the present disclosure provides a graph-centric motion forecasting model framework that improves the ability of the autonomous vehicle to predict the motion of these actors within the lanes.” }

Regarding claim 13, it recites A method having limitations similar to those of claim 3 and therefore is rejected on the same basis.

Claim(s) 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cui et al. (US 20220153309 A1, hereinafter known as Cui) in view of Zhang et al. (US 20220153315 A1, hereinafter known as Zhang) and Kabzan et al. (US 20220234618 A1, hereinafter known as Kabzan).

Regarding Claim 4, Cui in view of Zhang teaches The one or more processors of claim 3. Zhang further teaches wherein the one or more circuits are to: determine a lane mode distribution and {Para [0081-0082] “computing system can utilize the actor-specific graphs 306, 312 to help determine an interaction between actors. For example, the computing system can determine an interaction between the first actor 302 and the second actor 308 at least in part by propagating features between the first actor-specific graph 306 and the second actor-specific graph 312. This can include generating a global graph 314 based on the plurality of actor-specific graphs 306, 312. The global graph 314 can be associated with the plurality of actors (e.g., first actor 302 and second actor 308) and the plurality of lanes of the environment (e.g., the lanes relevant to each actor-specific graph). The global graph 314 can allow the computing system (e.g., of an autonomous vehicle) to determine which actors may interact with one another by propagating information over the global graph 314 (e.g., through message passing). To account for the potential interactions on a per actor level, the computing system can distribute the interactions determined using the global graph 314 to the individual actor-specific graphs 306, 312.
This can allow the actor-specific graphs to reflect the interactions between actors in the environment. For example, by distributing the interactions determined through the global graph 314, the first actor-specific graph 306 can reflect the potential interactions of the first actor 302 with respect to the second actor 308. Likewise, the second actor-specific graph 312 can reflect the potential interactions of the second actor 308 with respect to the first actor 302. The computing system can then predict a motion trajectory for an actor based on the associated actor-specific graphs 306 and 312 (which capture the actor-to-actor interactions and actor-to-map relations). For example, the computing system can determine a predicted motion trajectory of the first actor 302 based on the interaction (between the actors 302, 308) and the first actor-specific graph 306 such that the first actor 302 avoids the second actor 308. Additionally, or alternatively, the computing system can determine a predicted motion trajectory of the second actor 308 based on the interaction (between the actors 302, 308) and the second actor-specific graph 312 such that the second actor 308 avoids the second actor 302. To generate the actor-specific graphs and the global graphs, as well as predict actor motion trajectories, a computing system can leverage a machine-learned model framework. FIGS. 4A and 4B are diagrams of such a computing system 400 and a machine-learned model framework 450, according to some implementations of the present disclosure. FIG. 4A depicts an example system 400 configured to perform actor motion forecasting within the surrounding environment of an autonomous platform. The computing system 400 can be, for example, an autonomous vehicle control system for an autonomous vehicle. 
The computing system 400 can be included in and/or include any of the system(s) (e.g., autonomous platform 105, vehicle 205, vehicle computing system 210, remote computing system 290B, operations computing system 290A, etc.) described herein such as, for example, with reference to FIGS. 1, 2, etc.” }

Cui in view of Zhang does not teach: determine… a homotopy distribution and determine the joint scene mode distribution based at least on … the homotopy distribution. However, Kabzan teaches: determine… a homotopy distribution and determine the joint scene mode distribution based at least on … the homotopy distribution. {Para [0094] “[G]iven an initial state of the AV, a terminal state of the AV, a map representation and predictions of other agents in the scene, the homotopy extractor 453 finds all “approximately” feasible maneuvers the AV can perform. Note that in this context the resulting maneuvers might not be dynamically feasible but the homotopy extractor 453 guarantees that the resulting constraint set describing the maneuver is not an empty set (considering also the AV footprint). An AV maneuver is described by the homotopy. As described above, a homotopy is a subset of a set of constraints on a trajectory of an AV that the AV can adhere to while traversing a particular route. In some implementations, a homotopy can be a unique space where any path starting at a starting position (AV state) and ending at a terminal state can be continuously deformed. To find these maneuvers, the homotopy extractor 453 iterates over all possible decisions the AV can take with respect to other agents, e.g., pass on the left/right side, pass before or after or just stay behind. In short, an output of the homotopy extractor 453 describes the spatio-temporal location of the AV to an agent.
Although this can be a computationally expensive search, due to a set of simple checks all infeasible combinations can be eliminated.” }

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Cui in view of Zhang to incorporate the teachings of Kabzan to use a homotopy distribution because, by using homotopy, “all infeasible combinations can be eliminated” as discussed in para. [0094] of Kabzan.

Regarding claim 14, it recites A method having limitations similar to those of claim 4 and therefore is rejected on the same basis.

Allowable Subject Matter

Claims 5-8 and 15-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the 35 U.S.C. § 101 rejection is overcome.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Joshi et al. (US 20250128734 A1) teaches in para [0035] “As part of a pre-processing task (not shown), data represented by the graph 120 may be reversibly encoded by additional layers of the neural network of the SSL model 130 that precede the attention layers 133. For instance, the SSL model 130 may further comprise embedding and encoding layers, such as a multi-layer perceptron (MLP), that may receive the graph 120 as input and apply encoding operations (such as sine/cosine embeddings for continuous data values) to generate an encoded set of features. Similarly, a second MLP may operate on the plurality of node embeddings 140 (that is, take the nodes embeddings as an input from the feed-forward layers 134) to transform them into embedded data.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER MATTA whose telephone number is (571) 272-4296. The examiner can normally be reached Mon - Fri 10:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, James Lee, can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.G.M./
Examiner, Art Unit 3668

/JAMES J LEE/
Supervisory Patent Examiner, Art Unit 3668
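For readers mapping the cited Zhang disclosure onto the claim language, the actor-specific graph Zhang describes (lane-segment nodes carrying geometric and semantic lane features; pairwise edges of type left-of, right-of, predecessor, or successor; node embeddings that fuse lane features with the actor's past motion) can be sketched as below. This is an editor's illustrative reconstruction under stated assumptions: every class, field, and function name here is hypothetical, and the toy concatenation stands in for the learned encoder that Zhang's framework would actually use.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the actor-specific graph described in Zhang
# paras [0076]-[0082]. All names are the editor's, not Zhang's.

@dataclass
class LaneSegment:
    seg_id: int
    center: tuple            # geometric feature: center location (x, y)
    orientation: float       # geometric feature: heading of the segment
    curvature: float         # geometric feature
    is_turning_lane: bool    # semantic (binary) feature: lane type
    has_traffic_light: bool  # semantic feature: associated traffic element

@dataclass
class ActorGraph:
    actor_id: int
    nodes: dict = field(default_factory=dict)       # seg_id -> LaneSegment
    edges: list = field(default_factory=list)       # (src, dst, relation)
    embeddings: dict = field(default_factory=dict)  # seg_id -> feature vector

    def add_segment(self, seg: LaneSegment):
        self.nodes[seg.seg_id] = seg

    def relate(self, src: int, dst: int, relation: str):
        # The pairwise relations Zhang lists between lane segments.
        assert relation in {"left_of", "right_of", "predecessor", "successor"}
        self.edges.append((src, dst, relation))

    def embed(self, past_motion):
        # Node embeddings fuse lane features with the actor's past motion;
        # a real system would use a learned encoder, not concatenation.
        for seg_id, seg in self.nodes.items():
            lane_feats = [*seg.center, seg.orientation, seg.curvature,
                          float(seg.is_turning_lane),
                          float(seg.has_traffic_light)]
            self.embeddings[seg_id] = lane_feats + list(past_motion)

# Build a two-segment graph for one actor (cf. actor 302 in Zhang FIG. 3).
# A global graph over several such per-actor graphs would then support
# message passing between actors, per Zhang's global graph 314.
g = ActorGraph(actor_id=302)
g.add_segment(LaneSegment(0, (0.0, 0.0), 0.0, 0.0, False, False))
g.add_segment(LaneSegment(1, (5.0, 0.0), 0.0, 0.1, True, True))
g.relate(0, 1, "successor")
g.embed(past_motion=[0.0, 0.0, 5.0, 0.0])  # e.g., past position + velocity
```

The point of the sketch is the distinction the §103 mapping turns on: the `relate` edges capture segment-to-segment (map) relationships, while the claimed second pairwise relationships (agent-to-lane-segment) arrive only once each graph is tied to a specific actor via the motion-conditioned embeddings.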

Prosecution Timeline

Nov 27, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589770
SAFETY CONTROLLER FOR AUTOMATED DRIVING
2y 5m to grant; granted Mar 31, 2026
Patent 12570148
ACCESSORY MANAGEMENT SYSTEM THAT IDENTIFIES ACCESSORIES TO ALLOW FOR CONNECTION
2y 5m to grant; granted Mar 10, 2026
Patent 12552253
VEHICLE AND A METHOD OF CONTROLLING A DISPLAY TO OUTPUT A VISUAL INDICATION FOR INDUCING SELECTION OF A SPECIFIC DRIVING MODE
2y 5m to grant; granted Feb 17, 2026
Patent 12534132
SYSTEM AND METHOD FOR PROVIDING A VISUAL AID FOR STEERING ANGLE OFFSET IN A STEER-BY-WIRE SYSTEM
2y 5m to grant; granted Jan 27, 2026
Patent 12522245
COMPUTER-IMPLEMENTED METHOD FOR MANAGING AN OPERATIONAL DESIGN DOMAIN'S EXPANSION FOR AN AUTOMATED DRIVING SYSTEM
2y 5m to grant; granted Jan 13, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
Grant Probability With Interview: 94% (+22.6%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 137 resolved cases by this examiner. Grant probability derived from career allow rate.
