DETAILED ACTION
This action is in reply to the amendments and arguments filed December 8, 2025. Claims 1, 2, 4-7, and 9-12 are currently pending.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over previously cited Guo et al. (US Pub. No. 20230063960 A1), hereinafter Guo, in view of previously cited Wongpiromsarn et al. (US Pub. No. 20200189575 A1), hereinafter Wongpiromsarn, and further in view of Lyu et al. (US Pub. No. 20240176989 A1), hereinafter Lyu.
Regarding claim 1, Guo teaches [a] method for evaluating a traffic scene with several road users, the method comprising (Guo: Para. 0027, teaching that the invention involves a method for evaluating a traffic scene involving multiple vehicles): providing input data which results from recording the traffic scene and which specifies the road users and associated features, wherein the features are based at least in part on current… states of the road users (Guo: Para. 0102, teaching that data regarding the current location and speed of vehicles are input into the invention); providing a representation of the road users and their relationships to each other in the traffic scene and an infrastructure of the traffic scene, wherein the relationships are specified based on the features, wherein the infrastructure is represented by a parameterized representation, wherein the representation comprises a plurality of nodes of a graph representing the current… states of the road users, and wherein the representation comprises a plurality of edges of the graph explicitly specifying the relationships of the road users to each other (Guo: Para. 0028, 0049, and 0053, teaching that the information on the vehicles is converted to nodes of a graph that represents the relationships between the vehicles, including their locations).
Guo is silent to the features being based on the past states of the road users; and predicting a future development of the traffic scene using a graph neural network configured to receive the graph as input, the predicting the future development of the traffic scene including predicting a behavior of all of the road users represented in the graph.
In a similar field, Wongpiromsarn teaches providing input data which results from recording the traffic scene and which specifies the road users and associated features, wherein the features are based at least in part on current and past states of the road users (Wongpiromsarn: Para. 0061, teaching that current and historical data are used to predict future information about a vehicle) for the benefit of reducing the possibility of collisions between vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the traffic monitoring of multiple vehicles from Guo to predict the future states of the vehicles based on their past states, as taught by Wongpiromsarn, for the benefit of reducing the possibility of collisions between vehicles.
Guo in view of Wongpiromsarn is silent to predicting a future development of the traffic scene using a graph neural network configured to receive the graph as input, the predicting the future development of the traffic scene including predicting a behavior of all of the road users represented in the graph.
In a similar field, Lyu teaches predicting a future development of the traffic scene using a graph neural network configured to receive the graph as input, the predicting the future development of the traffic scene including predicting a behavior of all of the road users represented in the graph (Lyu: Para. 0038 and 0039, teaching the use of a recurrent neural network integrated with a graph neural network to extract and predict the trajectories of the vehicles in a traffic scene and how the behavior of each vehicle impacts the other vehicles) for the benefit of more accurately predicting the future trajectories of the vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the traffic monitoring of multiple vehicles from Guo in view of Wongpiromsarn to utilize a graph neural network to predict the future states of the vehicles, as taught by Lyu, for the benefit of more accurately predicting the future trajectories of the vehicles.
Regarding claim 7, Guo, Wongpiromsarn, and Lyu remain as applied to claim 1, and Guo further teaches [t]he method according to claim 1, wherein: behavior planning of an at least partially autonomous robot is carried out based on the provided result of the prediction, and the robot is a part of the traffic scene (Guo: Para. 0027, teaching that the invention involves a method for evaluating a traffic scene involving multiple vehicles).
Regarding claim 10, Guo, Wongpiromsarn, and Lyu remain as applied to claim 1, and Guo further teaches [a] device for data processing configured to carry out the method according to claim 1 (Guo: Para. 0099, teaching that the invention is implemented using a computer-readable medium programmed to perform the steps of the invention).
Regarding claim 11, Guo, Wongpiromsarn, and Lyu remain as applied to claim 1, and Guo further teaches [a] computer-readable storage medium comprising instructions which, when executed by a computer, cause it to carry out the steps of the method according to claim 1 (Guo: Para. 0099, teaching that the invention is implemented using a computer-readable medium programmed to perform the steps of the invention).
Regarding claim 12, Guo, Wongpiromsarn, and Lyu remain as applied to claim 7, and Guo further teaches [t]he method according to claim 7, wherein the at least partially autonomous robot is a vehicle (Guo: Para. 0027, teaching that the invention involves a method for evaluating a traffic scene involving multiple vehicles).
Claims 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Guo in view of Wongpiromsarn and Lyu as applied to claim 1 above, and further in view of previously cited Barbour et al. (US Pub. No. 20220405874 A1), hereinafter Barbour.
Regarding claim 2, Guo, Wongpiromsarn, and Lyu remain as applied to claim 1, and Wongpiromsarn further teaches [t]he method according to claim 1, wherein: the features are semantic features which are calculated from the current and past states (Wongpiromsarn: Para. 0127, teaching that the data on the states of the vehicles can be semantic data).
They are silent to the features being invariant in terms of rotation and translation with respect to coordinates of the traffic scene.
In a similar field, Barbour teaches the features are invariant in terms of rotation and translation with respect to coordinates of the traffic scene (Barbour: Para. 0092, teaching the use of rotation- and translation-invariant image data in training a machine-learning module) for the benefit of reducing the compute time and the size of the data that needs to be stored.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the training of machine learning to predict the future states of a vehicle from Guo in view of Wongpiromsarn and Lyu with scenes that are invariant in terms of rotation and translation with respect to coordinates of the scene, as taught by Barbour, for the benefit of reducing the compute time and the size of the data that needs to be stored.
Regarding claim 4, Guo, Wongpiromsarn, and Lyu remain as applied to claim 1, and Guo further teaches [t]he method according to claim 1, further comprising: using a machine learning model to provide the representation, which comprises a first embedding based at least in part on the features and which comprises a second embedding specifying a topology at the traffic scene (Guo: Para. 0064, teaching the use of neural networks to embed the graphs into vectors that represent various information such as the node-to-node relationships and the topology of the graphs).
They are silent to the first and/or second embedding being invariant in terms of rotation and translation with respect to coordinates for the traffic scene.
In a similar field, Barbour teaches the first and/or second embedding being invariant in terms of rotation and translation with respect to coordinates for the traffic scene (Barbour: Para. 0092, teaching the use of rotation- and translation-invariant image data in training a machine-learning module) for the benefit of reducing the compute time and the size of the data that needs to be stored.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the training of machine learning to predict the future states of a vehicle from Guo in view of Wongpiromsarn and Lyu with scenes that are invariant in terms of rotation and translation with respect to coordinates of the scene, as taught by Barbour, for the benefit of reducing the compute time and the size of the data that needs to be stored.
Claims 5, 6, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Guo in view of Wongpiromsarn and Lyu as applied to claim 1 above, and further in view of Shalev-Shwartz et al. (US Pub. No. 20210094577 A1), hereinafter Shalev-Shwartz.
Regarding claim 5, Guo, Wongpiromsarn, and Lyu remain as applied to claim 1, and Guo further teaches [t]he method according to claim 1, further comprising: using a machine learning model to provide the representation, the features being based at least in part on the current… states (Guo: Para. 0100, teaching that the invention can be performed by a machine learning algorithm; and Para. 0102, teaching that data regarding the current location and speed of vehicles are input into the processes of the invention), and Wongpiromsarn further teaches that the features are based on the past states (Wongpiromsarn: Para. 0061, teaching that current and historical data are used to predict future information about a vehicle).
They are silent to [t]he method according to claim 1, further comprising: using a machine learning model to provide the representation, the features… being calculated in a differentiable manner from state progressions of these states, and a first and/or second embedding of the machine learning model is implemented in a differentiable manner in order to train the machine learning model by way of a differentiable simulation.
In a similar field, Shalev-Shwartz teaches [t]he method according to claim 1, further comprising: using a machine learning model to provide the representation, the features being based at least in part on the current and past states and being calculated in a differentiable manner from the state progressions of these states, and a first and/or second embedding of the machine learning model is implemented in a differentiable manner in order to train the machine learning model by way of a differentiable simulation (Shalev-Shwartz: Para. 0251, teaching calculating a future state of the nodes in a differentiable manner to train a machine learning algorithm) for the benefit of eliminating the problem of error accumulation in the training process.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the training of the machine learning model from Guo in view of Wongpiromsarn and Lyu to use differentiable equations in the state progressions of the training, as taught by Shalev-Shwartz, for the benefit of eliminating the problem of error accumulation in the training process.
Regarding claim 6, Guo, Wongpiromsarn, and Lyu remain as applied to claim 1, and Guo further teaches [t]he method according to claim 1, wherein: the prediction is carried out using machine learning, the machine learning providing a simulation in which the machine learning is carried out based on a difference between the current… states of the road users (Guo: Para. 0100, teaching that the invention can be performed by a machine learning algorithm; and Para. 0102, teaching that data regarding the current location and speed of vehicles are input into the processes of the invention), and Wongpiromsarn further teaches that the features are based on the past states (Wongpiromsarn: Para. 0112, teaching predicting a future trajectory and behavior of a vehicle based on current and historical information on the vehicle).
They are silent to wherein the simulation is implemented as a differentiable simulation.
In a similar field, Shalev-Shwartz teaches wherein the simulation is implemented as a differentiable simulation (Shalev-Shwartz: Para. 0251, teaching calculating a future state of the nodes in a differentiable manner to train a machine learning algorithm) for the benefit of eliminating the problem of error accumulation in the training process.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the training of the machine learning model from Guo in view of Wongpiromsarn and Lyu to use differentiable equations in the state progressions of the training, as taught by Shalev-Shwartz, for the benefit of eliminating the problem of error accumulation in the training process.
Regarding claim 9, Guo teaches [a] training method for training a machine learning model for evaluating a traffic scene with a plurality of road users, comprising (Guo: Para. 0027, teaching that the invention involves a method for evaluating a traffic scene involving multiple vehicles; and Para. 0100, teaching that the invention can be performed by a machine learning algorithm): providing training data, wherein the training data specifies road users in a traffic scene and associated features, wherein the features are based at least in part on current states… of the road users (Guo: Para. 0102, teaching that data regarding the current location and speed of vehicles are input into the invention); and training a… neural network…, wherein the current… states of the road users are represented by nodes of a graph and their relationships in the traffic scene to each other are represented by edges of the graph, wherein the relationships are specified based on the features, and wherein an infrastructure of the traffic scene is represented by a parameterized representation (Guo: Para. 0028, 0049, and 0053, teaching that the information on the vehicles is converted to nodes of a graph that represents the relationships between the vehicles, including their locations).
Guo is silent to the features being based on the past states of the road users; and that the neural network is a graph neural network trained to predict a future development of the traffic scene, wherein the prediction is trained by a differentiable simulation taking into account the current and past states of the road users to predict a behavior of all represented road users based on the provided representation.
In a similar field, Wongpiromsarn teaches providing training data, wherein the training data specifies road users in a traffic scene and associated features, wherein the features are based at least in part on current and past states of the road users (Wongpiromsarn: Para. 0061, teaching that current and historical data are used to predict future information about a vehicle); and training a neural network to predict a future development of the traffic scene (Wongpiromsarn: Para. 0112, teaching predicting a future trajectory and behavior of a vehicle based on current and historical information on the vehicle) for the benefit of reducing the possibility of collisions between vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the machine learning model for traffic monitoring of multiple vehicles from Guo to predict the future states of the vehicles based on their past states, as taught by Wongpiromsarn, for the benefit of reducing the possibility of collisions between vehicles.
They are silent to the neural network being a graph neural network, and wherein the prediction is trained by a differentiable simulation taking into account the current and past states of the road users to predict a behavior of all represented road users based on the provided representation.
In a similar field, Lyu teaches training a graph neural network to predict a future development of the traffic scene, wherein the current and the past states of the road users are represented by nodes of a graph and their relationships in the traffic scene to each other are represented by edges of the graph, wherein the relationships are specified based on the features, and wherein an infrastructure of the traffic scene is represented by a parameterized representation (Lyu: Para. 0038 and 0039, teaching the use of a recurrent neural network integrated with a graph neural network to extract and predict the trajectories of the vehicles in a traffic scene and how the behavior of each vehicle impacts the other vehicles) for the benefit of more accurately predicting the future trajectories of the vehicles.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the traffic monitoring of multiple vehicles from Guo in view of Wongpiromsarn to utilize a graph neural network to predict the future states of the vehicles, as taught by Lyu, for the benefit of more accurately predicting the future trajectories of the vehicles.
They are silent to wherein the prediction is trained by a differentiable simulation taking into account the current and past states of the road users to predict a behavior of all represented road users based on the provided representation.
In a similar field, Shalev-Shwartz teaches wherein the prediction is trained by a differentiable simulation taking into account the current and past states of the road users to predict a behavior of all represented road users based on the provided representation (Shalev-Shwartz: Para. 0251, teaching calculating a future state of the nodes in a differentiable manner to train a machine learning algorithm) for the benefit of eliminating the problem of error accumulation in the training process.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the training of the machine learning model from Guo in view of Wongpiromsarn and Lyu to use differentiable equations in the state progressions of the training, as taught by Shalev-Shwartz, for the benefit of eliminating the problem of error accumulation in the training process.
Response to Arguments
Applicant's arguments filed December 8, 2025 have been fully considered. Except as noted below, they are not persuasive.
Applicant's arguments, see Remarks, filed December 8, 2025, with respect to the objections to the specification in light of the amendments filed have been fully considered and are persuasive. The objections to the specification have been withdrawn.
Applicant's arguments, see Remarks, filed December 8, 2025, with respect to the rejection of record under 35 U.S.C. 101 in light of the amendments filed have been fully considered and are persuasive. The rejection of claim 8 under 35 U.S.C. 101 has been withdrawn.
Applicant's arguments, see Remarks, filed December 8, 2025, with respect to the rejections of claims 1, 7, and 10-12 under 35 U.S.C. 103 over Guo in view of Wongpiromsarn in light of the amendments to independent claim 1 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Guo in view of Wongpiromsarn and further in view of Lyu.
Applicant contends (see page 9, line 8 through page 10, line 7, filed December 8, 2025) that Guo in view of Wongpiromsarn is deficient in teaching that the nodes of the graph used in the training represent both the current and past states of the vehicle, as Guo teaches a graph representation of the current states of a road user while Wongpiromsarn teaches only predicting a future state of the road user based on current and historical data, without a graphical representation. The examiner respectfully disagrees. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In this case, the teachings of Wongpiromsarn are combined with the neural networks of Guo to show that the nodes of the graphs may denote both the current and historical states of the vehicle.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron K McCullers whose telephone number is (571)272-3523. The examiner can normally be reached Monday - Friday, roughly 9 AM - 6 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Angela Ortiz can be reached at (571) 272-1206. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.K.M./Examiner, Art Unit 3663
/ANGELA Y ORTIZ/Supervisory Patent Examiner, Art Unit 3663