Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,218

PERCEIVING AND ASSOCIATING STATIC AND DYNAMIC OBJECTS USING GRAPH MACHINE LEARNING MODELS

Status: Non-Final OA (§103)
Filed: Aug 25, 2023
Examiner: GENTILE, ALEXANDER VINCENT
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
Projected OA Rounds: 3-4
Projected Time to Grant: 2y 7m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 75% (18 granted / 24 resolved), +23.0% vs TC avg (above average)
Interview Lift: +12.6% (moderate, roughly +13% lift in resolved cases with interview)
Typical Timeline: 2y 7m average prosecution; 26 applications currently pending
Career History: 50 total applications across all art units

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 51.4% (+11.4% vs TC avg)
§102: 27.4% (-12.6% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 24 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/09/2025 has been entered.

Status of Claims

Claims 1-7, 9-17, 19-25, and 27-30 are pending and have been examined. Claims 8, 18, and 26 are canceled. Claims 1-7, 9-17, 19-25, and 27-30 are either amended directly or via a claim from which they depend. Claims 1-7, 9-17, 19-25, and 27-30 are rejected.

Information Disclosure Statement

An information disclosure statement (IDS) was filed on 12/12/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Regarding the claim rejections under 35 U.S.C. § 102 and 35 U.S.C. § 103: Applicant’s arguments and corresponding amendments, see pages 14-17, filed on 11/17/2025, with respect to the rejections of claims 1-7, 9-17, 19-25, and 27-30, have been fully considered and are moot in view of the amendments, which incorporate amended subject matter into independent claims 1, 11, 19, and 29. After further search and consideration, new grounds of rejection have been made in response to the amendments and are discussed in the following section.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-7, 9-17, 19-25, and 27-30 are rejected under 35 U.S.C. 103 as being unpatentable over Pronovost et al. (US 2024/0208546 A1, hereinafter Pronovost) in view of Huang et al. (US 11,485,384 B2, hereinafter Huang).

Claim 1 Discloses: (Currently Amended) “A processing system comprising: one or more memories comprising processor-executable instructions; and one or more processors configured to execute the processor-executable instructions and cause the processing system to:” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13) “Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor-executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” “access a set of object detections, each respective object detection in the set of object detections corresponding to a respective object detected in an environment;” Pronovost teaches, (Paragraph [0014], Lines 1-8) “In some examples, the techniques discussed herein may be implemented in the context of a vehicle, such as an
autonomous vehicle. When an autonomous vehicle is operating in an environment, the vehicle may use sensors to capture sensor data (e.g., image or video data, radar data, lidar data, sonar data, etc.) of the surrounding environment, and may analyze the sensor data to detect and classify objects within the environment.” Pronovost additionally teaches, (Paragraph [0013], Lines 4-7) “this application relates to training and executing ML prediction models configured to output joint trajectory predictions for multiple dynamic objects (or agents) in an environment.” “generate, based on the set of object detections, a graph representation comprising a plurality of nodes and a plurality of edges, wherein: each respective node in the plurality of nodes corresponds to a respective object detection in the set of object detections,” Pronovost teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation. As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN.” “and each respective edge in the plurality of edges comprises one or more features generated based on a respective pair of nodes for the respective edge;” Pronovost teaches, (Paragraph [0057], Lines 14-16) “In various implementations, the GNN may be partially connected or fully connected with separate edge features associated with distinct pairs of nodes in the GNN.” “generate a set of output features, wherein, to generate the set of output features, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to process the graph representation, including the one or more features for each respective edge, using a trained message passing network;” Pronovost teaches, (Paragraph [0056], Lines 3-8) “an autonomous vehicle (e.g., vehicle 202) may use a trained ML prediction model configured to receive input data representing the current state of the driving environment of the vehicle, and configured to output one or more predicted future states of the environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” “and generate a predicted object relationship graph, wherein[[,]]: to generate the predicted object relationship graph, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to: process the set of output features using a layer of a trained machine learning model; Pronovost teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, 
graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” “and for at least a first pair of nodes in the graph representation, the first pair of nodes corresponding to a first object and a second object in the environment, prune an edge connecting the first pair of nodes: to prune the edge, … and the edge is pruned based on an effect the edge has upon the output features of the first pair of nodes.” Pronovost does not teach the node pruning as described in the preceding limitations. Huang does teach the preceding limitations. Huang teaches, (Page 15, Column 5, Lines 49-54) “In some examples, the search may comprise determining a directed graph between nodes of the sets of nodes. The directed graph may comprise a connection (e.g., edge) between a first node and a second node and/or weight (e.g., cost) associated with the connection,” and that, (Page 24, Column 23, Lines 16-23) “Determining whether a node is likely to result in a collision may comprise determining the shortest distance 428 from either circle to a nearest static or dynamic object. Determining a distance to a dynamic object may comprise determining a distance to a portion of the environment data and/or cost map associated with a likelihood at or above a threshold likelihood,” as well as, (Page 24, Column 23, Lines 36-39) “Associating the likelihood of collision with the node may comprise modifying a weight associated with the node as part of the cost determination operation at operation 444 (e.g., by increasing the cost).” Huang additionally teaches, (Page 23, Column 22, Lines 59-67 & Column 23, Lines 1-5) “At operation 420, example process 400 may comprise determining whether a node 422 is likely to result in a collision, according to any of the techniques discussed herein. For example, this operation may comprise representing positioning the vehicle at the node 422 in the multivariate space.
This may comprise representing the vehicle as two circles having diameters equal to a width of the autonomous vehicle, where a first circle may be centered at the front axle of the autonomous vehicle (i.e., fore circle 424) and a second circle may be centered at the rear axle of the autonomous vehicle (aft circle 426), and determining a distance between the representation and a nearest static object and/or a portion of the environment predicted as being occupied by a dynamic object at a future time,” and that, (Page 24, Column 23, Lines 27-35) “At operation 430, example process 400 may comprise pruning the node that may be likely to result in a collision and/or associating the likelihood of collision with the node, according to any of the techniques discussed herein. Pruning the node may comprise removing the node from the tree or graph or associating an indication with the node that the node will result in a collision. The collision indicator may prevent the node from being used as a parent for subsequent nodes and from being selected as part of the final path.” “the one or more processors are configured to execute the processor-executable instructions to cause the processing system to process output features of the first pair of nodes using the layer of the trained machine learning model;” Huang teaches, (Page 15, Column 6, Lines 40-46) “a processor of a first type (e.g., a graphics processing unit (GPU)) may determine the cost map, generate the nodes, prune the nodes, and/or determine the path and a processor of a second type may smooth the path generated by the GPU and/or determine a trajectory for controlling the vehicle based at least in part on the smooth path,” and that, (Page 14, Column 14, Lines 55-63) “Although localization component 226, perception component 228, planning component 230, guidance system 232, map(s) 234, and/or system controller(s) 236 are illustrated as being stored in memory 220, any of these components may include processor-executable 
instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 224 or configured as part of computing device(s) 214.” Huang additionally teaches, (Page 20, Column 15, Lines 4-14) “In some examples, an ML model may comprise a neural network. An exemplary neural network is a biologically inspired algorithm which passes input data through a series of connected layers to produce an output. Each layer in a neural network can also comprise another neural network, or can comprise any number of layers (whether convolutional or not). As can be understood in the context of this disclosure, a neural network can utilize machine-learning, which can refer to a broad class of such algorithms in which an output is generated based on learned parameters,” and that, (Page 24, Column 24, Lines 44-45) “FIG. 4C depicts an example of a pruned layer of nodes 446.” Therefore, it would have been obvious to a person of ordinary skill in the art to combine the graph neural network of Pronovost, which includes paired vehicle nodes that themselves include relative position measurements, with the node pruning system of Huang, which measures relative positions of nodes and determines whether the distance could introduce a risk of collision, in order to yield predictable results. Combining the references would yield the benefit of implementing the node pruning operation to avoid collisions in, for example, a modeled autonomous vehicle driving situation.
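For readers tracking the technical dispute, the pruning limitation at issue can be sketched as follows. This is an illustrative toy only, not the applicant's claimed implementation or either reference's disclosure: the array names, the single-round message pass, and the "remove the edge and measure the change in its endpoints' output features" threshold rule are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: 4 detected objects with 3-dim node features;
# each edge carries a relative-offset feature between its endpoints.
node_feats = rng.normal(size=(4, 3))
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
edge_feats = {e: node_feats[e[1]] - node_feats[e[0]] for e in edges}

W = rng.normal(size=(6, 3))  # weight mapping (node, edge) features to a message

def message_pass(nodes, edge_list):
    """One round of message passing: each node aggregates messages
    from its neighbors, conditioned on the shared edge feature."""
    out = nodes.copy()
    for i, j in edge_list:
        for src, dst in ((i, j), (j, i)):  # messages flow both directions
            msg_in = np.concatenate([nodes[src], edge_feats[(i, j)]])
            out[dst] += np.tanh(msg_in @ W)  # aggregate by summation
    return out

full_out = message_pass(node_feats, edges)

# Prune an edge based on the effect it has on the output features of
# its pair of nodes: drop it if removal barely changes those features.
prune_threshold = 0.05  # assumed value, chosen for illustration
kept = []
for e in edges:
    reduced = message_pass(node_feats, [x for x in edges if x != e])
    effect = np.linalg.norm(full_out[list(e)] - reduced[list(e)])
    if effect > prune_threshold:
        kept.append(e)

print(f"kept {len(kept)} of {len(edges)} edges after pruning")
```

The ablation-style "effect" measure here is only one plausible reading of the claim language; the claim itself does not recite how the effect is computed.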
As Huang describes, (Page 16, Column 7, Lines 4-8) “The techniques discussed herein may improve the safety of a vehicle by improving the vehicle's ability to predict movement and/or behavior of objects in the vehicle's surroundings and plan a path for the vehicle that is collision-free and economical.” Claim 2 Discloses: (Original) “The processing system of claim 1, wherein, to generate the graph representation, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to generate, for each respective node in the graph representation, a respective feature vector describing properties of a respective object in the environment.” Pronovost teaches, (Paragraph [0057], Lines 1-6) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation,” and that, (Paragraph [0040], Lines 1-4) “operation 140 may correspond to a training stage during which the training component 102 may perform backpropagation to modify agent feature vectors, edge features, etc.” Claim 3 Discloses: (Original) “The processing system of claim 2, wherein the properties of the respective object comprise at least one of:(i) a position of the respective object,(ii) a size of the respective object,(iii) an orientation of the respective object,(iv) a texture of the respective object,(v) a vulnerability measure of the respective object,(vi) a visibility of the respective object, (vii) a velocity of the respective object,(viii) an acceleration of the respective object,(ix) contents of the respective object, or (x) a status of the respective object.” Pronovost teaches, (Paragraph [0046], Lines 15-17) “a prediction system that may predict future positions, velocities, 
and/or accelerations of objects in the environment, ” and that, (Paragraph [0115], Lines 1-12) “In some examples, sensor data and/or perception data may be used to generate an environment state that represents a current state of the environment. For example, the environment state may be a data structure that identifies object data (e.g., object position, area of environment occupied by object, object heading, object velocity, historical object data), environment layout data (e.g., a map or sensor-generated layout of the environment), environment condition data (e.g., the location and/or area associated with environmental features, such as standing water or ice, whether it's raining, visibility metric), sensor data (e.g., an image, point cloud), etc.” Claim 4 Discloses: (Original) “The processing system of claim 1, wherein, to generate the graph representation, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to, for at least a first pair of nodes in the graph representation, the first pair of nodes corresponding to a first object and a second object in the environment: generate a first edge connecting the first pair of nodes; and generate a first feature vector describing one or more relationships between the first and second objects.” Pronovost teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN,” and that, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” Claim 5 Discloses: (Original) “The processing system of claim 4, wherein the one or more relationships between the first and second objects comprise at least one of: (i) relative distance between the first and second objects, (ii) relative velocity between the first and second objects,(iii) relative acceleration between the first and second objects,(iv) relative position between the first and second objects,(v) relative angle between the first and second objects,(vi) semantic similarity of the first and second objects, or (vii) geometric similarity of the first and second objects.” Pronovost teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN,” and that, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” Claim 6 Discloses: (Original) “The processing system of claim 1, wherein, to generate the set of output features, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to,” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13) “Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor -executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” “for a first node in the graph representation: generate, for each respective edge connecting a respective neighbor node to the first node, a respective message vector based on a feature vector of the first node, a respective feature vector of the respective neighbor node.” Pronovost teaches, (Paragraph [0057], Lines 1-6) “executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation.” Pronovost additionally teaches, 
(Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” Pronovost additionally teaches, (Paragraph [0066], Lines 1 and 13-16) “FIG. 4B depicts another example … the predicted futures 416, 418, 420, and 422 may be determined using the features associated with a node representing a specific vehicle (e.g., vehicle 406, 408, or 410)” “and a respective feature vector of the respective edge;” Pronovost teaches, (Paragraph [0066], Lines 16-19) “and the information encoded (e.g., into the edge features between the nodes of a GNN) representing the relative information of the additional vehicles in the driving environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN.” Pronovost additionally teaches, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” “and generate a first output feature, wherein, to generate the first output feature, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to aggregate the respective message vectors for the first node” A person of ordinary skill in the art would understand that in order to update a node’s vector representation in a Message Passing Neural Network, message vectors are aggregated. “based on a graph convolutional layer of the trained machine learning model.” Pronovost teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” Claim 7 Discloses: (Currently Amended) “The processing system of claim 1, wherein: to generate the predicted object relationship graph, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to, for at least a second pair of nodes in the graph representation, the second pair of nodes corresponding to a third object and a fourth object in the environment, predict an object relationship between the third and fourth objects; and to predict the object relationship between the third and fourth objects, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to process output features of the second pair of nodes using the layer of the trained machine learning model.” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system 
comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations.” Pronovost additionally teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.,” and that, (Paragraph [0087], Lines 1-11) “In some cases, the input data to the ML trajectory prediction model 702 and the second ML trajectory prediction model 704 (e.g., scene representation 708) may include feature data representing the state of each agent and/or object in the environment, along with map data of the environment. Within the ML trajectory prediction model 702, one or more intermediate layers and/or components may determine the relative distances (and/or other relative state data) between the agents, and may use the relative data in a subsequent layer or component determining the overall model output (e.g., the set of predicted trajectories).” Pronovost additionally teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation. As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN.” Therefore, Pronovost teaches determining an overall model output comprising the sets of vehicle trajectories, the prediction model of which stores offset data between multiple paired vehicle nodes. Pronovost does not explicitly designate third and fourth objects that have their own unique pairing that is different from that of a first and second object. However, a person of ordinary skill in the art would readily have conceived, before the effective filing date of the claimed invention, of an overall model comprising an explicit pairing between a third and a fourth actor, especially in view of FIG. 1, where four vehicles have their relative distances measured. As Pronovost describes, (Paragraph [0036], Lines 11-21) “the training component 102 may analyze the driving paths predicted agent trajectories 120-124 and/or the relative distances between each pair of agents (e.g., vehicles 110-114) within the predicted agent trajectories 120-124. For instance, as shown in box 132, the training component 102 may determine the distance between each pair of agents (e.g., distance 134 between vehicle 110 and vehicle 112, distance 136 between vehicle 110 and vehicle 114, and distance 138 between vehicle 112 and vehicle 114) based on the predicted agent trajectories 120-124, at each time step within the predicted trajectories.”

Claim 9 Discloses: (Original) “The processing system of claim 1, wherein the one or more processors are configured to further execute the processor-executable instructions to cause the processing system to generate one or more actions to be performed by an autonomous vehicle based on the predicted object relationship graph.” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13)
“Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor -executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” Pronovost additionally teaches, (Abstract, Lines 1-2) “Techniques are discussed herein for training and executing machine learning (ML) prediction models used to control autonomous vehicles in driving environments,” and that, (Paragraph [0117], Lines 1-7) “The planning component 930 may receive a location and/or orientation of the vehicle 902 from the localization component 920, perception data from the perception component 922, and/or predicted trajectories from the ML prediction model(s) 932, and may determine instructions for controlling operation of the vehicle 902 based at least in part on any of this data.” Claim 10 Discloses (Original) “The processing system of claim 9, wherein, to generate the one or more actions, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to generate a planned movement path for the autonomous vehicle.” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13) “Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor -executable 
instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” Pronovost additionally teaches, (Paragraph [0117], Lines 17-26) “the planning component 930 may comprise a nominal trajectory generation subcomponent that generates a set of candidate trajectories, and selects a trajectory for implementation by the drive systems(s) 914 based at least in part on determining a cost associated with a trajectory according to U.S. patent application Ser. No. 16/517,506, filed Jul. 19, 2019 and/or U.S. patent application Ser. No. 16/872,284, filed May 11, 2020, the entirety of which are incorporated herein for all purposes.” Claim 11 Discloses: (Currently Amended) “A processing system comprising: one or more memories comprising processor-executable instructions; and one or more processors configured to execute the processor-executable instructions and cause the processing system to:” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13) “Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor -executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” “access a set of object detections, each respective object detection in the set of object detections corresponding to a respective object detected in an environment;” Pronovost teaches, 
(Paragraph [0014], Lines 1-8) “In some examples, the techniques discussed herein may be implemented in the context of a vehicle, such as an autonomous vehicle. When an autonomous vehicle is operating in an environment, the vehicle may use sensors to capture sensor data (e.g., image or video data, radar data, lidar data, sonar data, etc.) of the surrounding environment, and may analyze the sensor data to detect and classify objects within the environment.” Pronovost additionally teaches, (Paragraph [0013], Lines 4-7) “this application relates to training and executing ML prediction models configured to output joint trajectory predictions for multiple dynamic objects (or agents) in an environment.” “generate, based on the set of object detections, a first graph representation corresponding to a first moment in time, wherein: the first graph representation comprises a plurality of nodes and a plurality of edges, each respective node in the plurality of nodes corresponds to a respective object detection in the set of object detections,” Pronovost teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation. As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN.” “and each respective edge in the plurality of edges comprises one or more features generated based on a respective pair of nodes for the respective edge;” Pronovost teaches, (Paragraph [0057], Lines 14-16) “In various implementations, the GNN may be partially connected or fully connected with separate edge features associated with distinct pairs of nodes in the GNN.” “generate a set of output features based on processing the first graph representation, including the one or more features for each respective edge,” Pronovost teaches, (Paragraph [0056], Lines 3-8) “an autonomous vehicle (e.g., vehicle 202) may use a trained ML prediction model configured to receive input data representing the current state of the driving environment of the vehicle, and configured to output one or more predicted future states of the environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) between pairs of objects in the GNN.” “using a message passing network; generate a predicted object relationship graph based on processing the set of output features using a layer of a machine learning model” Pronovost teaches, (Paragraph [0057], Lines 18-25) “graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment. 
In these examples, the outputs of the GNN may represent a distribution of predicted future states of the various objects (e.g., agents) in the environment.” Pronovost additionally teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” “… generate, based on the set of object detections, a second graph representation corresponding to a second moment in time subsequent to the first moment in time;” Pronovost teaches, (Paragraph [0026], Lines 10-16) “since the log data may comprise sequences including various vehicle states and/or object states over a period of time, the ground truth data may comprise the state of the vehicle that captured the log data and the states of any number of additional objects (e.g., agents and/or static objects) at each subsequent timestep from an input time.” Pronovost additionally teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” “and update one or more parameters of the message passing network and the layer of the machine learning model based on the predicted object relationship graph and the second graph representation.” Pronovost teaches, (Paragraph [0072]) “Based on receiving the input driving scene representation 510 from the ground truth environment data 508, the ML prediction model 502 may output a set of predicted trajectories 512. 
The predicted trajectories 512 may include a set of jointly determined predicted trajectories for two or more agents represented in the driving scene representation 510, for a future period of time within the driving scene. As described above, the predicted trajectories 512 may include a sequence of vehicle states (e.g., positions, poses, headings, yaws, steering angles, accelerations, etc.) for each agent in the scene representation 510. As shown in this example, the training component 102 may use an L2 loss component 514 to determine a first loss value for training the ML prediction model, based on the accuracy of the set of predicted trajectories 512. The L2 loss component 514 may be configured to compare the predicted trajectories 512 to the ground truth trajectories 516 corresponding to the same agents in the same driving scene, and may use an L2 loss function (although other types of loss components/functions may be used in other examples) to determine the L2 loss 518 to be propagated back into the ML prediction model 502 during the training.” “, wherein: to generate the predicted object relationship graph, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to, for at least a first pair of nodes in the first graph representation, the first pair of nodes corresponding to a first object and a second object in the environment, prune an edge connecting the first pair of nodes; to prune the edge,” Pronovost does not teach the node pruning as described in the preceding limitations. Huang, however, does teach the preceding limitations. Huang teaches, (Page 15, Column 5, Lines 49-54) “In some examples, the search may comprise determining a directed graph between nodes of the sets of nodes.
The directed graph may comprise a connection (e.g., edge) between a first node and a second node and/or weight (e.g., cost) associated with the connection,” and that, (Page 24, Column 23, Lines 16-23) “Determining whether a node is likely to result in a collision may comprise determining the shortest a distance 428 from either circle to a nearest static or dynamic object. Determining a distance to a dynamic object may comprise determining a distance to a portion of the environment data and/or cost map associated with a likelihood at or above a threshold likelihood,” as well as, (Page 24, Column 23, Lines 36-39) “Associating the likelihood of collision with the node may comprise modifying a weight associated with the node as part of the cost determination operation at operation 444 (e.g., by increasing the cost).” Huang additionally teaches, (Page 23, Column 22, Lines 59-67 & Page 24, Column 23, Lines 1-5) “At operation 420, example process 400 may comprise determining whether a node 422 is likely to result in a collision, according to any of the techniques discussed herein. For example, this operation may comprise representing positioning the vehicle at the node 422 in the multivariate space. This may comprise representing the vehicle as two circles having diameters equal to a width of the autonomous vehicle, where a first circle may be centered at the front axle of the autonomous vehicle (i.e., fore circle 424) and a second circle may be centered at the rear axle of the autonomous vehicle (aft circle 426), and determining a distance between the representation and a nearest static object and/or a portion of the environment predicted as being occupied by a dynamic object at a future time,” and that, (Page 24, Column 23, Lines 27-35) “At operation 430, example process 400 may comprise pruning the node that may be likely to result in a collision and/or associating the likelihood of collision with the node, according to any of the techniques discussed herein.
Pruning the node may comprise removing the node from the tree or graph or associating an indication with the node that the node will result in a collision. The collision indicator may prevent the node from being used as a parent for subsequent nodes and from being selected as part of the final path.” “the one or more processors are configured to execute the processor-executable instructions to cause the processing system to process output features of the first pair of nodes using the layer of the machine learning model; and the edge is pruned based on an effect the edge has upon the output features of the first pair of nodes;” Huang teaches, (Page 15, Column 6, Lines 40-46) “a processor of a first type (e.g., a graphics processing unit (GPU)) may determine the cost map, generate the nodes, prune the nodes, and/or determine the path and a processor of a second type may smooth the path generated by the GPU and/or determine a trajectory for controlling the vehicle based at least in part on the smooth path,” and that, (Page 14, Column 14, Lines 55-63) “Although localization component 226, perception component 228, planning component 230, guidance system 232, map(s) 234, and/or system controller(s) 236 are illustrated as being stored in memory 220, any of these components may include processor-executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 224 or configured as part of computing device(s) 214.” Huang additionally teaches, (Page 20, Column 15, Lines 4-14) “In some examples, an ML model may comprise a neural network. An exemplary neural network is a biologically inspired algorithm which passes input data through a series of connected layers to produce an output. Each layer in a neural network can also comprise another neural network, or can comprise any number of layers (whether convolutional or not). 
As can be understood in the context of this disclosure, a neural network can utilize machine-learning, which can refer to a broad class of such algorithms in which an output is generated based on learned parameters,” and that, (Page 24, Column 24, Lines 44-45) “FIG. 4C depicts an example of a pruned layer of nodes 446.” Therefore, it would have been obvious to a person of ordinary skill in the art to combine the graph neural network taught by Pronovost, which includes paired vehicle nodes containing relative position measurements, with the node pruning system of Huang, which measures relative positions of nodes and determines whether the distance could introduce a risk of collision, in order to yield predictable results. Combining the references would yield the benefit of implementing the node pruning operation to avoid collisions in, for example, a modeled autonomous vehicle driving scenario. As Huang describes, (Page 16, Column 7, Lines 4-8) “The techniques discussed herein may improve the safety of a vehicle by improving the vehicle's ability to predict movement and/or behavior of objects in the vehicle's surroundings and plan a path for the vehicle that is collision-free and economical.” Claim 12 Discloses: (Original) “The processing system of claim 11, wherein, to generate the first graph representation, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to generate, for each respective node in the first graph representation, a respective feature vector describing properties of a respective object in the environment.” Pronovost teaches, (Paragraph [0057], Lines 1-6) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment
elements and objects within a graph structure representation,” and that, (Paragraph [0040], Lines 1-4) “operation 140 may correspond to a training stage during which the training component 102 may perform backpropagation to modify agent feature vectors, edge features, etc.” Claim 13 Discloses: (Original) “The processing system of claim 12, wherein the properties of the respective object comprise at least one of: (i) a position of the respective object, (ii) a size of the respective object, (iii) an orientation of the respective object, (iv) a texture of the respective object, (v) a vulnerability measure of the respective object, (vi) a visibility of the respective object, (vii) a velocity of the respective object, (viii) an acceleration of the respective object, (ix) contents of the respective object, or (x) a status of the respective object.” Pronovost teaches, (Paragraph [0046], Lines 15-17) “a prediction system that may predict future positions, velocities, and/or accelerations of objects in the environment,” and that, (Paragraph [0115], Lines 1-12) “In some examples, sensor data and/or perception data may be used to generate an environment state that represents a current state of the environment.
For example, the environment state may be a data structure that identifies object data (e.g., object position, area of environment occupied by object, object heading, object velocity, historical object data), environment layout data (e.g., a map or sensor-generated layout of the environment), environment condition data (e.g., the location and/or area associated with environmental features, such as standing water or ice, whether it's raining, visibility metric), sensor data (e.g., an image, point cloud), etc.” Claim 14 Discloses: (Original) “The processing system of claim 11, wherein, to generate the first graph representation, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to, for at least a first pair of nodes in the first graph representation, the first pair of nodes corresponding to a first object and a second object in the environment: generate a first edge connecting the first pair of nodes; and generate a first feature vector describing one or more relationships between the first and second objects.” Pronovost teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN,” and that, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” Claim 15 Discloses: (Original) “The processing system of claim 14, wherein the one or more relationships between the first and second objects comprise at least one of: (i) relative distance between the first and second objects, (ii) relative velocity between the first and second objects, (iii) relative acceleration between the first and second objects, (iv) relative position between the first and second objects, (v) relative angle between the first and second objects, (vi) semantic similarity of the first and second objects, or (vii) geometric similarity of the first and second objects.” Pronovost teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.)
between pairs of objects in the GNN,” and that, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” Claim 16 Discloses: (Original) “The processing system of claim 11, wherein, to generate the set of output features, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to,” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13) “Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor-executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” “for a first node in the first graph representation: generate, for each respective edge connecting a respective neighbor node to the first node, a respective message vector based on a feature vector of the first node, a respective feature vector of the respective neighbor node,” Pronovost teaches, (Paragraph [0057], Lines 1-6) “executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation.” Pronovost additionally
teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” Pronovost additionally teaches, (Paragraph [0066], Lines 1 and 13-16) “FIG. 4B depicts another example … the predicted futures 416, 418, 420, and 422 may be determined using the features associated with a node representing a specific vehicle (e.g., vehicle 406, 408, or 410)” “and a respective feature vector of the respective edge;” Pronovost teaches, (Paragraph [0066], Lines 16-19) “and the information encoded (e.g., into the edge features between the nodes of a GNN) representing the relative information of the additional vehicles in the driving environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN.” Pronovost additionally teaches, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” “and generate a first output feature, wherein, to generate the first output feature, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to aggregate the respective message vectors for the first node” A person of ordinary skill in the art would understand that in order to update a node’s vector representation in a Message Passing Neural Network, message vectors are aggregated. “using a graph convolutional layer of the machine learning model.” Pronovost teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” Claim 17 Discloses: (Currently Amended) “The processing system of claim 11, wherein: to generate the predicted object relationship graph, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to, for at least a second pair of nodes in the first graph representation, the second pair of nodes corresponding to a third object and a fourth object in the environment, predict an object relationship between the third and fourth objects; and to predict the object relationship, the one or more processors are configured to execute the processor-executable instructions to cause the processing system to process output features of the second pair of nodes using the layer of the machine learning model.” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more 
computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations.” Pronovost additionally teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.,” and that, (Paragraph [0087], Lines 1-11) “In some cases, the input data to the ML trajectory prediction model 702 and the second ML trajectory prediction model 704 (e.g., scene representation 708) may include feature data representing the state of each agent and/or object in the environment, along with map data of the environment. Within the ML trajectory prediction model 702, one or more intermediate layers and/or components may determine the relative distances (and/or other relative state data) between the agents, and may use the relative data in a subsequent layer or component determining the overall model output (e.g., the set of predicted trajectories).” Pronovost additionally teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation. As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.)
between pairs of objects in the GNN.” Therefore, Pronovost teaches determining an overall model output comprising the sets of vehicle trajectories, the prediction model of which stores offset data between multiple paired vehicle nodes. Pronovost does not explicitly designate third and fourth objects that have their own unique pairing that is different from that of a first and second object. However, a person of ordinary skill in the art, before the effective filing date of the claimed invention, could readily have conceived of an overall model capable of comprising an explicit pairing between a third and a fourth actor, especially in view of FIG. 1, where four vehicles have their relative distances measured. As Pronovost describes, (Paragraph [0036], Lines 11-21) “the training component 102 may analyze the driving paths predicted agent trajectories 120-124 and/or the relative distances between each pair of agents (e.g., vehicles 110-114) within the predicted agent trajectories 120-124. For instance, as shown in box 132, the training component 102 may determine the distance between each pair of agents (e.g., distance 134 between vehicle 110 and vehicle 112, distance 136 between vehicle 110 and vehicle 114, and distance 138 between vehicle 112 and vehicle 114) based on the predicted agent trajectories 120-124, at each time step within the predicted trajectories.” Claim 19 Discloses: (Currently Amended) “A processor-implemented method of machine learning, comprising: accessing a set of object detections, each respective object detection in the set of object detections corresponding to a respective object detected in an environment;” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that, (Paragraph [0014], Lines 1-8) “In some examples, the techniques discussed herein may
be implemented in the context of a vehicle, such as an autonomous vehicle. When an autonomous vehicle is operating in an environment, the vehicle may use sensors to capture sensor data (e.g., image or video data, radar data, lidar data, sonar data, etc.) of the surrounding environment, and may analyze the sensor data to detect and classify objects within the environment.” Pronovost additionally teaches, (Paragraph [0013], Lines 4-7) “this application relates to training and executing ML prediction models configured to output joint trajectory predictions for multiple dynamic objects (or agents) in an environment.” “generating, based on the set of object detections, a graph representation comprising a plurality of nodes and a plurality of edges, wherein: each respective node in the plurality of nodes corresponds to a respective object detection in the set of object detections,” Pronovost teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation. As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN.” “and each respective edge in the plurality of edges comprises one or more features generated based on a respective pair of nodes for the respective edge;” Pronovost teaches, (Paragraph [0057], Lines 14-16) “In various implementations, the GNN may be partially connected or fully connected with separate edge features associated with distinct pairs of nodes in the GNN.” “generating a set of output features based on processing the graph representation, including the one or more features for each respective edge, using a trained message passing network;” Pronovost teaches, (Paragraph [0056], Lines 3-8) “an autonomous vehicle (e.g., vehicle 202) may use a trained ML prediction model configured to receive input data representing the current state of the driving environment of the vehicle, and configured to output one or more predicted future states of the environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” “and generating a predicted object relationship graph based on processing the set of output features using a layer of a trained machine learning model” Pronovost teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” “, wherein: generating the predicted object relationship graph comprises, for at least a first pair of nodes in the graph representation, the first pair of nodes corresponding to a first object and a second object in the environment, 
pruning an edge connecting the first pair of nodes based on processing output features of the first pair of nodes using the layer of the trained machine learning model; and the edge is pruned based on an effect the edge has upon the output features of the first pair of nodes.” Pronovost does not teach the node pruning as described in the preceding limitations. Huang does teach the preceding limitations and is directed to “an autonomous vehicle guidance system that generates a path for controlling an autonomous vehicle based at least in part on a static object map and/or one or more dynamic object maps.” Huang teaches, (Page 15, Column 5, Lines 49-54) “In some examples, the search may comprise determining a directed graph between nodes of the sets of nodes. The directed graph may comprise a connection (e.g., edge) between a first node and a second node and/or weight (e.g., cost) associated with the connection,” and that, (Page 24, Column 23, Lines 16-23) “Determining whether a node is likely to result in a collision may comprise determining the shortest a distance 428 from either circle to a nearest static or dynamic object. Determining a distance to a dynamic object may comprise determining a distance to a portion of the environment data and/or cost map associated with a likelihood at or above a threshold likelihood,” as well as, (Page 24, Column 23, Lines 36-39) “Associating the likelihood of collision with the node may comprise modifying a weight associated with the node as part of the cost determination operation at operation 444 (e.g., by increasing the cost).” Huang additionally teaches, (Page 23, Column 22, Lines 59-67 & Page 24, Column 23, Lines 1-5) “At operation 420, example process 400 may comprise determining whether a node 422 is likely to result in a collision, according to any of the techniques discussed herein. For example, this operation may comprise representing positioning the vehicle at the node 422 in the multivariate space.
This may comprise representing the vehicle as two circles having diameters equal to a width of the autonomous vehicle, where a first circle may be centered at the front axle of the autonomous vehicle (i.e., fore circle 424) and a second circle may be centered at the rear axle of the autonomous vehicle (aft circle 426), and determining a distance between the representation and a nearest static object and/or a portion of the environment predicted as being occupied by a dynamic object at a future time,” and that, (Page 24, Column 23, Lines 27-35) “At operation 430, example process 400 may comprise pruning the node that may be likely to result in a collision and/or associating the likelihood of collision with the node, according to any of the techniques discussed herein. Pruning the node may comprise removing the node from the tree or graph or associating an indication with the node that the node will result in a collision. The collision indicator may prevent the node from being used as a parent for subsequent nodes and from being selected as part of the final path.” Therefore, it would have been obvious to a person of ordinary skill in the art to combine the graph neural network taught by Pronovost, which includes paired vehicle nodes containing relative position measurements, with the node pruning system of Huang, which measures relative positions of nodes and determines whether the distance could introduce a risk of collision, in order to yield predictable results. Combining the references would yield the benefit of implementing the node pruning operation to avoid collisions in, for example, a modeled autonomous vehicle driving scenario.
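For orientation only, the node-pruning operation Huang describes — modeling the vehicle as two circles at the front and rear axles, and removing a node whose clearance to the nearest obstacle is too small — may be sketched in simplified form as follows. This is a minimal, hypothetical Python sketch: the function names, the point-obstacle simplification, and the zero-clearance threshold are assumptions of this illustration, not features recited in Huang or in the pending claims.

```python
import math

def min_clearance(node, obstacles, radius):
    # Smallest gap between either vehicle circle (centered at the fore
    # and aft axles) and the nearest obstacle point; negative = overlap.
    best = math.inf
    for cx, cy in (node["fore"], node["aft"]):
        for ox, oy in obstacles:
            best = min(best, math.hypot(cx - ox, cy - oy) - radius)
    return best

def prune_graph(nodes, edges, obstacles, radius, min_gap=0.0):
    # Keep only nodes whose clearance exceeds min_gap, and drop every
    # edge incident to a pruned node (cf. Huang's collision indicator,
    # which prevents a node from parenting subsequent nodes).
    kept = {nid for nid, node in nodes.items()
            if min_clearance(node, obstacles, radius) > min_gap}
    kept_edges = [(a, b) for a, b in edges if a in kept and b in kept]
    return kept, kept_edges
```

Under this sketch, a node whose fore circle overlaps an obstacle is removed together with its incident edges, while a node with ample clearance survives and remains available to the search.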
As Huang describes, (Page 16, Column 7, Lines 4-8) “The techniques discussed herein may improve the safety of a vehicle by improving the vehicle's ability to predict movement and/or behavior of objects in the vehicle's surroundings and plan a path for the vehicle that is collision-free and economical.” Claim 20 Discloses: (Original) “The processor-implemented method of claim 19, wherein generating the graph representation comprises generating, for each respective node in the graph representation, a respective feature vector describing properties of a respective object in the environment.” Pronovost teaches, (Paragraph [0057], Lines 1-6) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation,” and that, (Paragraph [0040], Lines 1-4) “operation 140 may correspond to a training stage during which the training component 102 may perform backpropagation to modify agent feature vectors, edge features, etc.” Claim 21 Discloses: (Original) “The processor-implemented method of claim 20, wherein the properties of the respective object comprise at least one of: (i) a position of the respective object, (ii) a size of the respective object, (iii) an orientation of the respective object, (iv) a texture of the respective object, (v) a vulnerability measure of the respective object, (vi) a visibility of the respective object, (vii) a velocity of the respective object, (viii) an acceleration of the respective object, (ix) contents of the respective object, or (x) a status of the respective object.” Pronovost teaches, (Paragraph [0046], Lines 15-17) “a prediction system that may predict future positions, velocities, and/or accelerations of objects in the environment,” and that, (Paragraph [0115], Lines 1-12)
“In some examples, sensor data and/or perception data may be used to generate an environment state that represents a current state of the environment. For example, the environment state may be a data structure that identifies object data (e.g., object position, area of environment occupied by object, object heading, object velocity, historical object data), environment layout data (e.g., a map or sensor-generated layout of the environment), environment condition data (e.g., the location and/or area associated with environmental features, such as standing water or ice, whether it's raining, visibility metric), sensor data (e.g., an image, point cloud), etc.” Claim 22 Discloses: (Original) “The processor-implemented method of claim 19, wherein generating the graph representation comprises, for at least a first pair of nodes in the graph representation, the first pair of nodes corresponding to a first object and a second object in the environment: generating a first edge connecting the first pair of nodes; and generating a first feature vector describing one or more relationships between the first and second objects.” Pronovost teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN,” and that, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” Claim 23 Discloses: (Original) “The processor-implemented method of claim 22, wherein the one or more relationships between the first and second objects comprise at least one of: (i) relative distance between the first and second objects, (ii) relative velocity between the first and second objects, (iii) relative acceleration between the first and second objects, (iv) relative position between the first and second objects, (v) relative angle between the first and second objects, (vi) semantic similarity of the first and second objects, or (vii) geometric similarity of the first and second objects.” Pronovost teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) 
between pairs of objects in the GNN,” and that, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” Claim 24 Discloses: (Original) “The processor-implemented method of claim 19, wherein generating the set of output features comprises, for a first node in the graph representation: generating, for each respective edge connecting a respective neighbor node to the first node, a respective message vector based on a feature vector of the first node, a respective feature vector of the respective neighbor node,” Pronovost teaches, (Paragraph [0057], Lines 1-6) “executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation.” Pronovost additionally teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” Pronovost additionally teaches, (Paragraph [0066], Lines 1 and 13-16) “FIG. 
4B depicts another example … the predicted futures 416, 418, 420, and 422 may be determined using the features associated with a node representing a specific vehicle (e.g., vehicle 406, 408, or 410)” “and a respective feature vector of the respective edge;” Pronovost teaches, (Paragraph [0066], Lines 16-19) “and the information encoded (e.g., into the edge features between the nodes of a GNN) representing the relative information of the additional vehicles in the driving environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) between pairs of objects in the GNN.” Pronovost additionally teaches, (Paragraph [0088], Lines 17-21) “for prediction models using top-down representations of driving environments and/or scene encodings, a masking component may modify or remove feature vectors representing distances between agents, etc.” “and generating a first output feature based on aggregating the respective message vectors for the first node” A person of ordinary skill in the art would understand that in order to update a node’s vector representation in a Message Passing Neural Network, message vectors are aggregated. 
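The message-passing update recited in this limitation, and acknowledged above as routine aggregation, can be sketched in a few lines. The sketch below is purely illustrative: the dimensions, weight matrices, and function names are assumptions chosen for exposition, not an implementation drawn from Pronovost or from the claims.

```python
# Minimal NumPy sketch of one message-passing/aggregation step; all names
# and dimensions here are illustrative assumptions, not from the references.
import numpy as np

rng = np.random.default_rng(0)
F = 4                                  # feature dimension (illustrative)

node_feats = {0: rng.normal(size=F),   # the "first node"
              1: rng.normal(size=F),   # neighbor nodes
              2: rng.normal(size=F)}
edge_feats = {(0, 1): rng.normal(size=F),   # one feature vector per edge
              (0, 2): rng.normal(size=F)}

# Learned weights of a single graph-convolutional layer (random stand-ins).
W_msg = rng.normal(size=(3 * F, F))
W_out = rng.normal(size=(F, F))

def update_node(i, neighbors):
    """Compute one message vector per incident edge, then aggregate."""
    messages = []
    for j in neighbors:
        # Each message depends on the first node, the neighbor node, and
        # the edge feature vector: the three inputs the claim recites.
        inputs = np.concatenate([node_feats[i], node_feats[j],
                                 edge_feats[(i, j)]])
        messages.append(np.tanh(inputs @ W_msg))
    aggregated = np.sum(messages, axis=0)      # permutation-invariant sum
    return np.tanh(aggregated @ W_out)         # the "first output feature"

out = update_node(0, neighbors=[1, 2])
print(out.shape)   # prints (4,)
```

However many neighbors contribute messages, the aggregation reduces them to a single fixed-size output feature for the node, which is what makes the summation step the understood default for a person of ordinary skill.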
using a graph convolutional layer of the trained machine learning model.” Pronovost teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” Claim 25 Discloses: (Currently Amended) “The processor-implemented method of claim 19, wherein generating the predicted object relationship graph comprises, for at least a second pair of nodes in the graph representation, the second pair of nodes corresponding to a third object and a fourth object in the environment, predicting an object relationship between the third and fourth objects based on processing output features of the second pair of nodes using the layer of the trained machine learning model.” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations.” Pronovost additionally teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.,” and that, (Paragraph [0087], Lines 1-11) “In some cases, the input data to the ML trajectory prediction model 702 and the second ML trajectory prediction model 704 (e.g., scene representation 708) may include feature data representing the state of each agent and/or object in the environment, along with map data of the environment. 
Within the ML trajectory prediction model 702, one or more intermediate layers and/or components may determine the relative distances (and/or other relative state data) between the agents, and may use the relative data in a subsequent layer or component determining the overall model output (e.g., the set of predicted trajectories).” Pronovost additionally teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation. As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) between pairs of objects in the GNN.” Therefore, Pronovost teaches determining an overall model output comprising the sets of vehicle trajectories, the prediction model of which stores offset data between multiple paired vehicle nodes. Pronovost does not explicitly designate third and fourth objects that have their own unique pairing that is different from that of a first and second object. However, a person of ordinary skill in the art would readily have conceived, before the effective filing date of the claimed invention, of an overall model comprising an explicit pairing between a third and fourth actor, especially in view of FIG. 1, where four vehicles have their relative distances measured. 
As Pronovost describes, (Paragraph [0036], Lines 11-21) “the training component 102 may analyze the driving paths predicted agent trajectories 120-124 and/or the relative distances between each pair of agents (e.g., vehicles 110-114) within the predicted agent trajectories 120-124. For instance, as shown in box 132, the training component 102 may determine the distance between each pair of agents (e.g., distance 134 between vehicle 110 and vehicle 112, distance 136 between vehicle 110 and vehicle 114, and distance 138 between vehicle 112 and vehicle 114) based on the predicted agent trajectories 120-124, at each time step within the predicted trajectories.” Claim 27 Discloses (Original) “The processor-implemented method of claim 19, further comprising generating one or more actions to be performed by an autonomous vehicle based on the predicted object relationship graph.” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13) “Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor -executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” Pronovost additionally teaches, (Abstract, Lines 1-2) “Techniques are discussed herein for training and executing machine learning (ML) prediction models used to control autonomous vehicles in driving environments,” and that, (Paragraph [0117], Lines 1-7) “The planning component 930 may receive a location and/or orientation 
of the vehicle 902 from the localization component 920, perception data from the perception component 922, and/or predicted trajectories from the ML prediction model(s) 932, and may determine instructions for controlling operation vehicle 902 based at least in part on any of this data.” Claim 28 Discloses (Original) “The processor-implemented method of claim 27, wherein generating the one or more actions comprises generating a planned movement path for the autonomous vehicle.” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that (Paragraph [0118], Lines 4-13) “Although localization component 920, perception component 922, the prediction component 928, the planning component 930, the ML prediction model(s) 932, and/or system controller(s) 926 are illustrated as being stored in memory 918, any of these components may include processor -executable instructions, machine-learned model(s) (e.g., a neural network), and/or hardware and all or part of any of these components may be stored on memory 940 or configured as part of computing device(s) 936.” Pronovost additionally teaches, (Paragraph [0117], Lines 17-26) “the planning component 930 may comprise a nominal trajectory generation subcomponent that generates a set of candidate trajectories, and selects a trajectory for implementation by the drive systems(s) 914 based at least in part on determining a cost associated with a trajectory according to U.S. patent application Ser. No. 16/517,506, filed Jul. 19, 2019 and/or U.S. patent application Ser. No. 
16/872,284, filed May 11, 2020, the entirety of which are incorporated herein for all purposes.” Claim 29 Discloses: (Currently Amended) “A processor-implemented method of machine learning, comprising: accessing a set of object detections, each respective object detection in the set of object detections corresponding to a respective object detected in an environment;” Pronovost teaches, (Paragraph [0124], Lines 1-4) “A system comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations,” and that, (Paragraph [0014], Lines 1-8) “In some examples, the techniques discussed herein may be implemented in the context of a vehicle, such as an autonomous vehicle. When an autonomous vehicle is operating in an environment, the vehicle may use sensors to capture sensor data (e.g., image or video data, radar data, lidar data, sonar data, etc.) of the surrounding environment, and may analyze the sensor data to detect and classify objects within the environment.” Pronovost additionally teaches, (Paragraph [0013], Lines 4-7) “this application relates to training and executing ML prediction models configured to output joint trajectory predictions for multiple dynamic objects (or agents) in an environment.” “generating, based on the set of object detections, a first graph representation corresponding to a first moment in time, wherein: the first graph representation comprises a plurality of nodes and a plurality of edges, each representative node in the plurality of nodes corresponds to a respective object detection in the set of object detections,” Pronovost teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the 
vectorized environment elements and objects within a graph structure representation. As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) between pairs of objects in the GNN.” “and each respective edge in the plurality of edges comprises one or more features generated based on a respective pair of nodes for the respective edge;” Pronovost teaches, (Paragraph [0057], Lines 14-16) “In various implementations, the GNN may be partially connected or fully connected with separate edge features associated with distinct pairs of nodes in the GNN.” “generating a set of output features based on processing the first graph representation including the one or more features for each respective edge, using a message passing network;” Pronovost teaches, (Paragraph [0056], Lines 3-8) “an autonomous vehicle (e.g., vehicle 202) may use a trained ML prediction model configured to receive input data representing the current state of the driving environment of the vehicle, and configured to output one or more predicted future states of the environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” “generating a predicted object relationship graph based on processing the set of output features using a layer of a machine learning model” Pronovost teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to 
machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” “… generating, based on the set of object detection, a second graph representation corresponding to a second moment in time subsequent to the first moment in time;” Pronovost teaches, (Paragraph [0026], Lines 10-16) “since the log data may comprise sequences including various vehicle states and/or object states over a period of time, the ground truth data may comprise the state of the vehicle that captured the log data and the states of any number of additional objects (e.g., agents and/or static objects) at each subsequent timestep from an input time.” Pronovost additionally teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” “and updating one or more parameters of the message passing network and the layer of the machine learning model based on the predicted object relationship graph and the second graph representation.” Pronovost teaches, (Paragraph [0072]) “Based on receiving the input driving scene representation 510 from the ground truth environment data 508, the ML prediction model 502 may output a set of predicted trajectories 512. The predicted trajectories 512 may include a set of jointly determined predicted trajectories for two or more agents represented in the driving scene representation 510, for a future period of time within the driving scene. As described above, the predicted trajectories 512 may include a sequence of vehicle states (e.g., positions, poses, headings, yaws, steering angles, accelerations, etc.) for each agent in the scene representation 510. 
As shown in this example, the training component 102 may use an L2 loss component 514 to determine a first loss value for training the ML prediction model, based on the accuracy of the set of predicted trajectories 512. The L2 loss component 514 may be configured to compare the predicted trajectories 512 to the ground truth trajectories 516 corresponding to the same agents in the same driving scene, and may use an L2 loss function (although other types of loss components/functions may be used in other examples) to determine the L2 loss 518 to be propagated back into the ML prediction model 502 during the training.” “, wherein: generating the predicted object relationship graph comprises, for at least a first pair of nodes in the first graph representation, the first pair of nodes corresponding to a first object and a second object in the environment, pruning an edge connecting the first pair of nodes based on processing features of the first pair of nodes using the layer of the machine learning model; and the edge is pruned based on an effect the edge has upon the output features of the first pair of nodes;” Pronovost does not teach the node pruning as described in the preceding limitations. Huang does teach the preceding limitations and is directed to “an autonomous vehicle guidance system that generates a path for controlling an autonomous vehicle based at least in part on a static object map and/or one or more dynamic object maps.” Huang teaches, (Page 15, Column 5, Lines 49-54) “In some examples, the search may comprise determining a directed graph between nodes of the sets of nodes. The directed graph may comprise a connection (e.g., edge) between a first node and a second node and/or weight (e.g., cost) associated with the connection,” and that, (Page 24, Column 23, Lines 16-23) “Determining whether a node is likely to result in a collision may comprise determining the shortest distance 428 from either circle to a nearest static or dynamic object. 
Determining a distance to a dynamic object may comprise determining a distance to a portion of the environment data and/or cost map associated with a likelihood at or above a threshold likelihood,” as well as, (Page 24, Column 23, Lines 36-39) “Associating the likelihood of collision with the node may comprise modifying a weight associated with the node as part of the cost determination operation at operation 444 (e.g., by increasing the cost).” Huang additionally teaches, (Page 23, Column 22, Lines 59-67 & Page Column 23, Lines 1-5) “At operation 420, example process 400 may comprise determining whether a node 422 is likely to result in a collision, according to any of the techniques discussed herein. For example, this operation may comprise representing positioning the vehicle at the node 422 in the multivariate space. This may comprise representing the vehicle as two circles having diameters equal to a width of the autonomous vehicle, where a first circle may be centered at the front axle of the autonomous vehicle (i.e., fore circle 424) and a second circle may be centered at the rear axle of the autonomous vehicle (aft circle 426), and determining a distance between the representation and a nearest static object and/or a portion of the environment predicted as being occupied by a dynamic object at a future time,” and that, (Page 24, Column 23, Lines 27-35) “At operation 430, example process 400 may comprise pruning the node that may be likely to result in a collision and/or associating the likelihood of collision with the node, according to any of the techniques discussed herein. Pruning the node may comprise removing the node from the tree or graph or associating an indication with the node that the node will result in a collision. 
The collision indicator may prevent the node from being used as a parent for subsequent nodes and from being selected as part of the final path.” Huang additionally teaches, (Page 20, Column 15, Lines 4-14) “In some examples, an ML model may comprise a neural network. An exemplary neural network is a biologically inspired algorithm which passes input data through a series of connected layers to produce an output. Each layer in a neural network can also comprise another neural network, or can comprise any number of layers (whether convolutional or not). As can be understood in the context of this disclosure, a neural network can utilize machine-learning, which can refer to a broad class of such algorithms in which an output is generated based on learned parameters,” and that, (Page 24, Column 24, Lines 44-45) “FIG. 4C depicts an example of a pruned layer of nodes 446.” Therefore, it would have been obvious to a person of ordinary skill in the art to combine the graph neural network taught by Pronovost, which includes paired vehicle nodes that themselves include relative position measurements, with the node pruning system taught by Huang, which measures the relative positions of nodes and determines whether a given distance could introduce a risk of collision, in order to yield predictable results. Combining the references would yield the benefit of implementing the node pruning operation to avoid collisions in, for example, a model of an autonomous vehicle driving situation. 
As Huang describes, (Page 16, Column 7, Lines 4-8) “The techniques discussed herein may improve the safety of a vehicle by improving the vehicle's ability to predict movement and/or behavior of objects in the vehicle's surroundings and plan a path for the vehicle that is collision-free and economical.” Claim 30 Discloses: (Currently Amended) “The processor-implemented method of claim 29, wherein: generating the set of output features comprises, for a first node in the first graph representation: generating, for each respective edge connecting a respective neighbor node to the first node, a respective message vector based on a feature vector of the first node, a respective feature vector of the respective neighbor node,” Pronovost teaches, (Paragraph [0057], Lines 1-6) “executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation.” Pronovost additionally teaches, (Paragraph [0057], Lines 16-22) “Machine-learning-based inference operations, such as, for example, graph message passing, may be performed to update the state of the GNN, including updating nodes and/or edge features, based on internal inputs determined from the GNN itself and/or based on updated observations perceived by the autonomous vehicle in the environment.” Pronovost additionally teaches, (Paragraph [0066], Lines 1 and 13-16) “FIG. 
4B depicts another example … the predicted futures 416, 418, 420, and 422 may be determined using the features associated with a node representing a specific vehicle (e.g., vehicle 406, 408, or 410)” “and a respective feature vector of the respective edge;” Pronovost teaches, (Paragraph [0066], Lines 16-19) “and the information encoded (e.g., into the edge features between the nodes of a GNN) representing the relative information of the additional vehicles in the driving environment.” Pronovost additionally teaches, (Paragraph [0057], Lines 7-12) “the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) between pairs of objects in the GNN.” “and generating a first output feature based on aggregating the respective message vectors for the first node” A person of ordinary skill in the art would understand that in order to update a node’s vector representation in a Message Passing Neural Network, message vectors are aggregated. 
“using a graph convolutional layer of the machine learning model;” Pronovost teaches, (Paragraph [0054], Lines 9-13) “The training component 232 may include any number of machine learning components, including but not limited to machine learning algorithms, graphical neural networks (GNNs), convolutional layers, encoding and transformation components, etc.” “and generating the predicted object relationship graph comprises, for at least a second pair of nodes in the first graph representation, the second pair of nodes corresponding to a third object and a fourth object in the environment, predicting an object relationship between the third and fourth objects based on processing output features of the second pair of nodes using the layer of the machine learning model.” Pronovost teaches, (Paragraph [0087], Lines 1-11) “In some cases, the input data to the ML trajectory prediction model 702 and the second ML trajectory prediction model 704 (e.g., scene representation 708) may include feature data representing the state of each agent and/or object in the environment, along with map data of the environment. Within the ML trajectory prediction model 702, one or more intermediate layers and/or components may determine the relative distances (and/or other relative state data) between the agents, and may use the relative data in a subsequent layer or component determining the overall model output (e.g., the set of predicted trajectories).” Pronovost additionally teaches, (Paragraph [0057], Lines 1-12) “In some examples, executing the prediction model may include determining and vectorizing elements of the environment from a feature map associated with the environment, as well as the objects (e.g., agents) perceived in the environment, and representing the vectorized environment elements and objects within a graph structure representation. 
As shown in this example, the prediction model may use a graph neural network (GNN) including a combination of vehicle nodes and/or object nodes, and including an edge network storing offset data (e.g., relative positions, relative poses, relative speeds, relative accelerations, relative sizes, etc.) between pairs of objects in the GNN.” Therefore, Pronovost teaches determining an overall model output comprising the sets of vehicle trajectories, the prediction model of which stores offset data between multiple paired vehicle nodes. Pronovost does not explicitly designate third and fourth objects that have their own unique pairing that is different from that of a first and second object. However, a person of ordinary skill in the art would readily have conceived, before the effective filing date of the claimed invention, of an overall model comprising an explicit pairing between a third and fourth actor, especially in view of FIG. 1, where four vehicles have their relative distances measured. As Pronovost describes, (Paragraph [0036], Lines 11-21) “the training component 102 may analyze the driving paths predicted agent trajectories 120-124 and/or the relative distances between each pair of agents (e.g., vehicles 110-114) within the predicted agent trajectories 120-124. For instance, as shown in box 132, the training component 102 may determine the distance between each pair of agents (e.g., distance 134 between vehicle 110 and vehicle 112, distance 136 between vehicle 110 and vehicle 114, and distance 138 between vehicle 112 and vehicle 114) based on the predicted agent trajectories 120-124, at each time step within the predicted trajectories.” RELEVANT, BUT NOT CITED ART The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Urtasun et al. 
(US 2021/0009163 A1) teaches, (Paragraph [0027], Lines 1-7) “As indicated above the graph neural network can be described as “spatially aware.” For example, messages can be passed between the nodes in a manner that captures spatial relationships between the actors such that interactions between the nodes can be better modeled. Messages can be passed between the nodes (e.g., along the edges of the GNN) to update respective node states of the nodes.” Vinay et al. (US 2024/0135197 A1) discloses, (Abstract) “expanding a seed scene using proposals from a generative model of scene graphs. The method may include clustering subgraphs according to respective one or more maximal connected subgraphs of a scene graph. The scene graph includes a plurality of nodes and edges. The method also includes generating a scene sequence for the scene graph based on the clustered subgraphs. A first machine learning model determines a predicted node in response to receiving the scene sequence. A second machine learning model determines a predicted edge in response to receiving the scene sequence and the predicted node. A scene graph is output according to the predicted node and the predicted edge.” Cserna et al. (US 2022/0290997 A1) teaches, (Paragraph [0117], Lines 1-3) “In comparison with the graph 1302, the graph 1312 has been pruned or reduced to include edges that are determined to be relevant to the vehicle or otherwise useful.” Huang et al. (US 11,875,678 B2) teaches, (Abstract) “An autonomous vehicle guidance system that generates a path for controlling an autonomous vehicle based at least in part on a data structure generated based at least in part on sensor data that may indicate occupied space in an environment surrounding an autonomous vehicle. The guidance system may receive a grid and generate a grid associated with the grid and the data structure. 
The guidance system may additionally or alternatively sub-sample the grid (laterally and/or longitudinally) dynamically based at least in part on characteristics determined from the data structure. The guidance system may identify a path based at least in part on a set of precomputed motion primitives, costs associated therewith, and/or a heuristic cost plot that indicates a cheapest cost to move from one pose to another.”

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER V. GENTILE, whose telephone number is (703) 756-1501. The examiner can normally be reached Monday through Friday, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito R. Robinson, can be reached at (571) 270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/ALEXANDER V GENTILE/
Examiner, Art Unit 3664

/KITO R ROBINSON/
Supervisory Patent Examiner, Art Unit 3664
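For readers less familiar with the cited architecture, the graph structure described in the Pronovost and Urtasun excerpts (one node per agent, edges storing pairwise offset data such as relative position and speed, and messages passed along the edges to update node states) can be sketched as follows. All names, the toy message function, and the vehicle positions are illustrative assumptions, not taken from either reference's implementation.

```python
import itertools
import math

def build_offset_edges(agents):
    """agents: dict node_id -> {'pos': (x, y), 'speed': v}.
    Returns one edge per unordered pair of nodes, storing the offset data
    (relative position, distance, relative speed) the excerpts describe."""
    edges = {}
    for a, b in itertools.combinations(sorted(agents), 2):
        dx = agents[b]['pos'][0] - agents[a]['pos'][0]
        dy = agents[b]['pos'][1] - agents[a]['pos'][1]
        edges[(a, b)] = {
            'rel_pos': (dx, dy),
            'distance': math.hypot(dx, dy),  # cf. distances 134/136/138 in FIG. 1
            'rel_speed': agents[b]['speed'] - agents[a]['speed'],
        }
    return edges

def message_passing_step(agents, edges):
    """One 'spatially aware' update: each node aggregates messages from its
    incident edges into its state (cf. Urtasun, Paragraph [0027])."""
    updated = {}
    for node, state in agents.items():
        # Toy message: inverse-distance weight from each neighbor (a real GNN
        # would use a learned function of the edge offsets instead).
        msgs = [1.0 / (1.0 + e['distance'])
                for pair, e in edges.items() if node in pair]
        updated[node] = {**state, 'context': sum(msgs) / len(msgs) if msgs else 0.0}
    return updated

# Four vehicles, loosely mirroring Pronovost's FIG. 1 (positions are made up)
vehicles = {
    110: {'pos': (0.0, 0.0), 'speed': 10.0},
    112: {'pos': (3.0, 4.0), 'speed': 12.0},
    114: {'pos': (6.0, 8.0), 'speed': 9.0},
    116: {'pos': (0.0, 5.0), 'speed': 11.0},
}
edges = build_offset_edges(vehicles)
print(edges[(110, 112)]['distance'])  # 5.0 (distance between vehicles 110 and 112)
states = message_passing_step(vehicles, edges)
```

The relevance to the rejection is structural: once every pair of nodes carries its own edge, a pairing between a third and fourth agent exists in exactly the same form as the pairing between the first and second, which is the point the examiner draws from FIG. 1.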

Prosecution Timeline

Aug 25, 2023
Application Filed
Apr 25, 2025
Non-Final Rejection — §103
Jul 10, 2025
Response Filed
Sep 10, 2025
Final Rejection — §103
Nov 17, 2025
Response after Non-Final Action
Dec 09, 2025
Request for Continued Examination
Dec 22, 2025
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596381
TRAVEL MAP CREATING APPARATUS, TRAVEL MAP CREATING METHOD, AND RECORDING MEDIUM
2y 5m to grant · Granted Apr 07, 2026
Patent 12584747
STATE ESTIMATION DEVICE AND STATE ESTIMATION METHOD
2y 5m to grant · Granted Mar 24, 2026
Patent 12560939
DRIVING ROBOT GENERATING DRIVING MAP AND CONTROLLING METHOD THEREOF
2y 5m to grant · Granted Feb 24, 2026
Patent 12545186
DISPLAY CONTROL DEVICE
2y 5m to grant · Granted Feb 10, 2026
Patent 12517512
CONTROL METHOD FOR CONTROLLING DELIVERY SYSTEM
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
88%
With Interview (+12.6%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
