DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 21 October 2025 has been entered. Claims 1-10 remain pending in the application.
Response to Arguments
Applicant’s arguments, see Applicant’s Remarks filed 21 October 2025, with respect to the objection to claim 5 have been fully considered and are persuasive. The claim has been amended to correct the typographical error. The objection to claim 5 has been withdrawn.
Applicant’s arguments with respect to the rejection of claims 1-5 and 7 under 35 U.S.C. 102 and the rejection of claims 6 and 8-10 under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection, necessitated by Applicant’s amendment, does not rely on any teaching or matter specifically challenged in the argument.
Applicant’s arguments with respect to the rejection of claims 1-10 under 35 U.S.C. 101 have been fully considered and are persuasive. The independent claims have been amended to recite a practical application of the recited mental processes. The rejection of claims 1-10 under 35 U.S.C. 101 has been withdrawn.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Sasu (U.S. Patent Application Publication 2020/0255006) in view of Lv (U.S. Patent Application Publication 2018/0074505).
Regarding claim 1, Sasu teaches a computer-implemented method for control of a vehicle, the method comprising: detecting, by a processing system that includes at least one computer, a participant in a traffic scene based on scene-specific information aggregated at a current first point in time, wherein the scene-specific information is obtained from sensor data generated in real time by a sensor system of the vehicle during operation of the vehicle (Paragraph 0046 An exemplary embodiment of the method of determining a kinematic of a target, applied for highway driving scenarios, is represented in FIG. 7. Paragraph 0047 S1: Select a target T when it is detected in the predetermined sensing zone;); reconstructing, by the processing system, a past track profile of the participant's likely actual past positions, orientations, and/or movements that occurred over a sequence of prior points in time preceding the current first point in time (Paragraph 0048 S2: Generate a trace of the target based on position, heading, speed and acceleration;), wherein: the reconstruction (a) is based on the scene-specific information aggregated at the current first point in time (Paragraph 0051 S4.0: once the target is selected and a trace of at least five historical points is detected); and predicting, by the processing system, at least one possible future trajectory of the participant to take place over a sequence of future points in time that will follow the current first point in time based at least in part on the reconstructed past track profile (Paragraph 0052 S4.1: at each further cycle keep projecting the n paths as functions depending on the calculated radius; look at the previously projected paths and add higher probabilities to those which are equal with the newly projected ones;).
However, Sasu does not teach wherein the reconstruction (b) includes, generating virtual perception results of the participant for the sequence of prior points in time; and each of the virtual perception results includes location information and/or movement information of the participant for a respective one of the prior points in time; and automatically controlling, by the processing system, an autonomous driving operation of the vehicle, in continuance of the operation of the vehicle, based on the at least one possible future trajectory that has been predicted for the participant.
Lv, in the same field of endeavor, teaches a system for controlling a vehicle based on a detected other traffic participant. The system detects information regarding the traffic scene at a current time, estimates a past state of the other traffic participant (Paragraph 0034 Sensor fusion factors may comprise factors based on the vehicle motion model and the measurement model. In one embodiment, the sensor fusion factors may be… The past states may be estimated based on measurements. The motion model may depend on road priors. The motion state may be estimated with simultaneous localization and mapping (SLAM) methods.), and controls the vehicle based on the information regarding the traffic scene and past state (Paragraph 0052 At block 530, the future trajectory of the first vehicle may be planned based on the predicted future trajectory of the second vehicle and a safety cost function. The planned future trajectory of the first vehicle may be modeled with a cubic spline comprising a 3rd order polynomial. The future trajectory of the first vehicle may be planned based further on a control smoothness factor, or the road structure, or any combination thereof. At block 540, the first vehicle may be driven to follow the planned trajectory.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified Sasu with the teachings of Lv, which teaches estimating a past state of the other traffic participant and controlling the vehicle based on the information regarding the traffic scene and past state, in order to improve the estimate of the other traffic participant’s future state (See Lv Paragraph 0036 The future trajectories of the other vehicles may be predicted based on road priors and may be correlated to their past states and the road structure.).
Regarding claim 2, Sasu in view of Lv teaches the method of claim 1 as set forth above. Sasu further teaches wherein each of the virtual perception results includes location information and/or movement information of the participant for a respective one of the prior points in time, including information about a position and/or dimensions and/or orientation and/or speed and/or acceleration and/or rotation rate of the participant (Paragraph 0048 S2: Generate a trace of the target based on position, heading, speed and acceleration;).
Regarding claim 3, Sasu in view of Lv teaches the method of claim 1 as set forth above. Sasu further teaches wherein the prediction of the at least one possible future trajectory for the participant includes generating virtual perception results for a sequence of points in time in the future (Paragraph 0052 S4.1: at each further cycle keep projecting the n paths as functions depending on the calculated radius; look at the previously projected paths and add higher probabilities to those which are equal with the newly projected ones;).
Regarding claim 4, Sasu in view of Lv teaches the method of claim 3 as set forth above. Sasu further teaches wherein a possibility of combining the reconstructed past track profile and the predicted future trajectory of the participant is taken into account in the reconstruction and the prediction (Paragraph 0052 S4.1: at each further cycle keep projecting the n paths as functions depending on the calculated radius; look at the previously projected paths and add higher probabilities to those which are equal with the newly projected ones;).
Regarding claim 5, Sasu in view of Lv teaches the method of claim 1 as set forth above. Sasu further teaches wherein the prior points in time are a predefined first number of points in time in the past and/or the future points in time are a predefined second number of points in time in the future (Paragraph 0051 S4.0: once the target is selected and a trace of at least five historical points is detected,…).
Regarding claim 7, the claim is commensurate in scope with claim 1 with the exception that claim 7 is directed to a system for control of a vehicle rather than a method for control of a vehicle. Therefore, the same prior art can be applied to claim 7 as was applied to claim 1.
Claims 6 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sasu in view of Lv and Pronovost (U.S. Patent Application Publication 2024/0253620).
Regarding claim 6, Sasu in view of Lv teaches the method of claim 1 as set forth above. However, Sasu does not teach wherein a deep learning architecture is used to: a. map the scene-specific information aggregated at the current point in time onto at least one set of latent features, b. ascertain the virtual perception result for the participant for one or more of the prior points in time based on the set of latent features, and c. reconstruct the past track profile of the participant and/or predict at least one possible future trajectory for the participant.
Pronovost, in the same field of endeavor, teaches a method for predicting future tracks of participants in a traffic scene wherein a deep learning architecture is used (Paragraph 0026 FIG. 1 illustrates an autonomous vehicle (vehicle 102) in an example environment 100, in which an example machine learned model (prediction component 104) may process input data (input data 106) to generate example output data (output data 108) representing a scene and/or predict state data associated with an autonomous vehicle and/or an object in the environment 100.) to: a. map the scene-specific information aggregated at a given point in time onto at least one set of latent features (Paragraph 0031 For example, the prediction component 104 can represent a diffusion model that is configured to, based at least in part on receiving map data and occupancy data or bounding box data associated with a first time as input, output discrete latent variables associated with a second time subsequent to the first time.), b. ascertain the perception result for the participant at the given point in time based on the set of latent features (Paragraph 0031 In another example, a decoder can receive map data (e.g., a roadway, a crosswalk, a building, etc.) and discrete latent variables (e.g., values representing, for a time, an attribute or state of an environment, an object, or a vehicle in a latent space) as input and output object states for multiple objects in an environment for the time including occupancy data or bounding box data.), and c. reconstruct at least one past track profile of the participant and/or predict at least one possible future trajectory for the participant (Paragraph 0032 The prediction component 104 may, in various examples, represent a track component that is configured to receive object states for multiple objects in an environment for multiple times (e.g., occupancy data or bounding box data) and generate predicted object tracks for respective objects.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified Sasu with the teachings of Pronovost, which teaches using a deep learning architecture to: a. map the scene-specific information aggregated at a given point in time onto at least one set of latent features, b. ascertain the perception result for the participant at the given point in time based on the set of latent features, and c. reconstruct at least one past track profile of the participant and/or predict at least one possible future trajectory for the participant, in order to safely operate a vehicle in the vicinity of the tracked participant (See Pronovost Paragraph 0001 Accurately predicting future object tracks may be necessary to safely operate the vehicle in the vicinity of the object.).
Regarding claim 8, Sasu in view of Lv teaches the system of claim 7 as set forth above. However, Sasu does not teach wherein the processing system is programmed with a first trained neural network, which generates at least one set of latent features for the participant based on the scene-specific information.
Pronovost, in the same field of endeavor, teaches a method for predicting future tracks of participants in a traffic scene wherein the input stage includes a first trained neural network (Paragraph 0026 FIG. 1 illustrates an autonomous vehicle (vehicle 102) in an example environment 100, in which an example machine learned model (prediction component 104) may process input data (input data 106) to generate example output data (output data 108) representing a scene and/or predict state data associated with an autonomous vehicle and/or an object in the environment 100.), which generates at least one set of latent features for the participant based on scene-specific information aggregated at the given point in time (Paragraph 0031 For example, the prediction component 104 can represent a diffusion model that is configured to, based at least in part on receiving map data and occupancy data or bounding box data associated with a first time as input, output discrete latent variables associated with a second time subsequent to the first time.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified Sasu with the teachings of Pronovost, which teaches wherein the input stage includes a first trained neural network, which generates at least one set of latent features for the participant based on scene-specific information aggregated at the given point in time, in order to safely operate a vehicle in the vicinity of the tracked participant (See Pronovost Paragraph 0001 Accurately predicting future object tracks may be necessary to safely operate the vehicle in the vicinity of the object.).
Regarding claim 9, Sasu in view of Lv and Pronovost teaches the system of claim 8 as set forth above. However, Sasu does not teach wherein the processing system is programmed with a second trained neural network, which, using the set of latent features generated by the first trained neural network, performs the reconstruction and/or prediction.
Pronovost, in the same field of endeavor, teaches a method for predicting future tracks of participants in a traffic scene wherein the predictor includes a second trained neural network, which, using the set of latent features generated by the input stage, reconstructs at least one past track profile for the participant and/or predicts at least one possible future trajectory for the participant (Paragraph 0032 The prediction component 104 may, in various examples, represent a track component that is configured to receive object states for multiple objects in an environment for multiple times (e.g., occupancy data or bounding box data) and generate predicted object tracks for respective objects.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified Sasu with the teachings of Pronovost, which teaches wherein the predictor includes a second trained neural network, which, using the set of latent features generated by the input stage, reconstructs at least one past track profile for the participant and/or predicts at least one possible future trajectory for the participant, in order to safely operate a vehicle in the vicinity of the tracked participant (See Pronovost Paragraph 0001 Accurately predicting future object tracks may be necessary to safely operate the vehicle in the vicinity of the object.).
Regarding claim 10, Sasu in view of Lv and Pronovost teaches the system of claim 8 as set forth above. However, Sasu does not teach wherein the first neural network has aggregated information about a history of track profiles by being jointly trained with a second neural network.
Pronovost, in the same field of endeavor, teaches a method for predicting future tracks of participants in a traffic scene wherein the first neural network of the input stage has aggregated information about the history of track profiles by jointly training with a second neural network of the predictor (Paragraph 0036 A training component associated with the computing device(s) 736 (not shown) and/or the vehicle computing device(s) 704 (not shown) may be implemented to train the prediction component 104. Training data may include a wide variety of data, such as sensor data, map data, bounding box data, real-world or labelled scenes, etc., that is associated with a value (e.g., a desired classification, inference, prediction, etc.).).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to have modified Sasu with the teachings of Pronovost, which teaches wherein the first neural network of the input stage has aggregated information about the history of track profiles by jointly training with a second neural network of the predictor, in order to safely operate a vehicle in the vicinity of the tracked participant (See Pronovost Paragraph 0001 Accurately predicting future object tracks may be necessary to safely operate the vehicle in the vicinity of the object.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Choi – U.S. Patent Application Publication 2023/0419080
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK D MOHL whose telephone number is (571)272-8987. The examiner can normally be reached M-Th 6:00AM-4:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci, can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PATRICK DANIEL MOHL/Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/Supervisory Patent Examiner, Art Unit 3666