Prosecution Insights
Last updated: April 19, 2026
Application No. 18/269,209

SYSTEMS AND METHODS RELATED TO CONTROLLING AUTONOMOUS VEHICLE(S)

Final Rejection §103
Filed: Jun 22, 2023
Examiner: HASSANIARDEKANI, HAJAR
Art Unit: 3669
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Aurora Operations, Inc.
OA Round: 3 (Final)

Grant Probability: 88% (Favorable)
OA Rounds: 4-5
To Grant: 3y 0m
With Interview: 62%

Examiner Intelligence

Career Allow Rate: 88% (above average; 7 granted / 8 resolved; +35.5% vs TC avg)
Interview Lift: −25.0% (minimal), across resolved cases with interview
Avg Prosecution: 3y 0m (typical timeline)
Total Applications: 42 across all art units (34 currently pending)
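The headline figures above follow from the raw counts. A small illustrative check (the helper function is hypothetical, not part of the report; the dashboard evidently rounds 87.5% to 88%):

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved applications."""
    if resolved == 0:
        raise ValueError("no resolved applications")
    return 100.0 * granted / resolved

rate = allowance_rate(granted=7, resolved=8)  # 87.5, shown as 88%
tc_average = rate - 35.5                      # implied TC 3600 average, ~52%
```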

Statute-Specific Performance

§101: 12.7% (−27.3% vs TC avg)
§102: 15.8% (−24.2% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§112: 19.7% (−20.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 8 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Application

Claims 1-4, 6-21 are pending. Claims 1, 19, and 20 are the independent claims. Claims 1-2, 6, 19-20 have been amended. Claim 5 was previously cancelled. Claim 21 is newly added. This office action is in response to the Amendments received on 12/08/2025.

Response to Arguments

With respect to Applicant’s remarks filed on 12/08/2025, “Applicant Arguments/Remarks Made in an Amendment” have been fully considered. According to the applicant’s remarks (page 10, title “35 U.S.C § 103 Rejections”), and in response to the rejections of claims 1-2 and 19-20 under 35 U.S.C § 103 in the non-final office action filed on 09/16/2025, applicant has amended claims 1-2 and 19-20 to further clarify the limitation that was previously recited as “[] probability distribution over a plurality of decisions for how to navigate a given stream []”. The amended claims as currently presented recite “wherein the further predicted output for each of the plurality of streams and with respect to each of the plurality of actors comprises a respective probability distribution over a plurality of AV control decisions for how [[to]] the AV should navigate a given stream, of the plurality of streams, based on each of the plurality of actors;”. Due to the nature of the applicant’s amendments to claims 1-2 and 19-20 as discussed above, and also the newly added claim 21, the scope of the applicant’s claimed invention has changed and thus requires new analysis and new application of prior art, as mapped below in this Office Action.

Office Note: Due to applicant’s amendments, further claim rejections appear on the record as stated in the below Office Action. It is the Office’s stance that all of applicant’s arguments have been considered.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, and 8-21 are rejected under 35 U.S.C.
103 as being unpatentable over Gochev et al., US 20190161080 A1, hereinafter “Gochev”, in view of Amini et al., US 20200088525 A1, hereinafter “Amini”.

Regarding claim 1, Gochev discloses a method for training one or more machine learning ("ML") models for use by an autonomous vehicle ("AV") (Abstract, “Systems and methods for controlling the motion of an autonomous”, Para [0006]-[0007], Para [0032], “the blocking model can include a machine-learned model”, Para [0034], “The vehicle action model can be a machine-learned model configured to determine a plurality of vehicle actions for the autonomous vehicle.”), the method comprising: obtaining a plurality of actors for a past episode of locomotion of a vehicle, each of the plurality of actors corresponding to an object in an environment of the vehicle during the past episode (Para [0005], “The method includes obtaining,…, data associated with an object within a surrounding environment of an autonomous vehicle.”, Para [0023], “The state data can be indicative of one or more states (e.g., current or past state(s)) of one or more objects that are within the surrounding environment of the autonomous vehicle.”, Para [0028], [0057]); obtaining a plurality of streams in the environment of the vehicle during the past episode (Para [0021], “obtain map data”, “The map data can provide information regarding: the identity and location of different roadways, road segments, buildings, sidewalks, or other items; the location and directions of traffic lanes (e.g., the boundaries, location, direction, etc.
of a parking lane, a turning lane, a bicycle lane, or other lanes within a particular travel way)”), each of the plurality of streams representing a candidate navigation path, for the vehicle or the object corresponding to a given one of the actors, in the environment of the vehicle (Para [0022], “the autonomy computing system can include a perception system, a prediction system, and a motion planning system.”, Para [0027], “predicted motion trajectory”, Para [0058], “the prediction system 126 can determine a predicted trajectory of an object. The predicted trajectory can be indicative of a predicted path that the object is predicted to travel over time and the timing associated therewith.”); processing, using one or more ML model layers of one or more of the ML models, the plurality of actors and the plurality of streams to generate predicted output for each of the plurality of actors (Para [0029], “the vehicle computing system can determine a respective vehicle action for the autonomous vehicle at each of the respective time steps based at least in part on whether the object is blocking or not blocking the autonomous vehicle at the respective time step.”, Para [0030], Para [0032], “the blocking model can include a machine-learned model (e.g., a machine-learned blocking model).”); processing, using one or more additional ML model layers of one or more of the ML models, the predicted output for each of the plurality of actors to generate further predicted output for each of the plurality of streams and with respect to each of the plurality of actors (Para [0019], “The autonomous vehicle can predict a trajectory by which the object is to travel over a certain time period”, “The autonomous vehicle can utilize the vehicle action sequence to plan its motion and autonomously navigate through its environment.”, Para [0022], [0024], “prediction system”, Para [0029], “the predicted motion trajectory of the object”, “Then, the vehicle computing system can determine a 
respective vehicle action for the autonomous vehicle at each of the respective time steps based at least in part on whether the object is blocking or not blocking the autonomous vehicle at the respective time step.”, Para [0033]-[0034], “The vehicle action model can be a machine-learned model”); generating, based on one or more reference labels for the past episode of locomotion and the further predicted output for each of the plurality of streams and with respect to each of the plurality of actors, one or more losses; and updating, based on the one or more losses, one or more of the additional ML model layers of one or more of the ML models (Para [0036], “the machine-learned vehicle action can be trained using supervised training techniques based on training data.”, “These labels can be utilized as ground-truth data to determine the accuracy and/or development of the vehicle action model as it is trained.”, Para [0078] and Fig. 2B, “The model trainer 254 can evaluate a training output 256 of the vehicle action model 138 to determine the accuracy and/or confidence level of the model as it is trained over time. The model trainer 254 can continue to train the vehicle action model 138 until a sufficient level of accuracy and/or confidence is achieved.”), wherein one or more of the additional ML model layers of one or more of the ML models are subsequently utilized in controlling the AV (Para [0019], [0032]-[0038]).

Gochev does not explicitly teach wherein the further predicted output for each of the plurality of streams and with respect to each of the plurality of actors comprises a respective probability distribution over a plurality of AV control decisions for how the AV should navigate a given stream, of the plurality of streams, based on each of the plurality of actors.
However, Amini teaches wherein the further predicted output for each of the plurality of streams and with respect to each of the plurality of actors comprises a respective probability distribution over a plurality of AV control decisions for how the AV should navigate a given stream, of the plurality of streams, based on each of the plurality of actors ([0034], “one or more steering trajectories available to a vehicle pertain to an external road agent detected by an ego vehicle [] computation module 230 can, based at least in part on the parameters of the probability distribution, predict that the road agent will travel along a particular one of the one or more steering trajectories available to the road agent. The ego vehicle (vehicle 100) can then respond in accordance with the predicted trajectory of the external road agent, if necessary.”, __at least the cited paragraph of the reference reads on predicted output of the plurality of streams with respect to each of the plurality of actors as recited in the claim__, [0039], “probabilistic control output 350 includes parameters for a probability distribution (e.g., the GMM parameters) for the trajectories 430a, 430b, and 430c. In some embodiments, computation module 230 can predict that the ego vehicle will travel along a particular one of the available steering trajectories based, at least in part, on the parameters of the probability distribution (e.g., the GMM parameters)”, [0041], __according to at least the cited paragraphs, Amini discloses predicting a trajectory/stream that an ego vehicle (autonomous vehicle) should take by calculating the probability distribution for different trajectories that the ego vehicle can steer, based on the prediction of the trajectory that the external road agent (reads on actors) takes, which reads on the claimed limitations__).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev, to include the step of determining/outputting a selection of a stream/trajectory from among a plurality of streams to navigate the vehicle based on the probability distribution of different trajectories and based on each of the plurality of actors, as taught by Amini, with a reasonable expectation of success, in order to allow the model to capture multiple possible outputs and provide broader information for safe decision-making. This probabilistic output enables the autonomous vehicle to select navigation strategies that balance safety and efficiency under uncertainty.

Regarding claim 2, although Gochev discloses wherein the one or more additional ML layers correspond to a plurality of disparate deciders (Para [0025], “The motion plan can include vehicle actions with respect to the objects within the surrounding environment of the autonomous vehicle as well as the predicted movements.”, Para [0027], “control the motion of the autonomous vehicle by determining a plurality of vehicle actions for a given object during each motion planning cycle.”, Para [0037], “the vehicle action sequence can include the discrete vehicle action decided for each respective time step of the predicted object trajectory.”), Gochev does not explicitly disclose wherein the respective probability distribution over a plurality of AV control decisions comprises an associated probability for a predicted decision made by each decider, of the plurality of disparate deciders, for each of the plurality of streams and with respect to each of the plurality of actors.
Nevertheless, Amini teaches the respective probability distribution over a plurality of AV control decisions comprises an associated probability for a predicted decision made by each decider, of the plurality of disparate deciders, for each of the plurality of streams and with respect to each of the plurality of actors (at least Para [0020], [0034], [0039], “the probabilistic control output 350 includes parameters for a probability distribution (e.g., the GMM parameters) for the trajectories 430a, 430b, and 430c.”, __Note: the underlined part reads on probability distribution over a plurality of AV control decisions (which are different trajectories that the AV control predicts for travelling of the AV). The prediction is based on the probability distribution of different predicted trajectories (decisions) and based on the road agents’ movement as disclosed according to at least paragraphs [0020] and [0041]__, Para [0041], “Three possible trajectories available to external road agent 530 are marked with heavy lines as trajectory 520a (left turn), trajectory 520b (proceed straight), and trajectory 520c (right turn), where the trajectories are named from the perspective of external road agent 530. In the embodiment discussed above in connection with FIG. 3, the probabilistic control output 350 includes parameters for a probability distribution (e.g., the GMM parameters) for the trajectories 520a, 520b, and 520c. In some embodiments, computation module 230 can predict that the external road agent 530 will travel along a particular one of the available steering trajectories based, at least in part, on the parameters of the probability distribution, as discussed above.
In some embodiments, control module 250 can control one or more vehicle systems 140 of vehicle 100 such as steering system 143 in response to the predicted trajectory of the external road agent 530.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev, to include the step of determining/outputting a selection of a stream/trajectory from among a plurality of streams to navigate the vehicle based on the probability distribution of different trajectories and based on each of the plurality of actors, as taught by Amini, with a reasonable expectation of success, in order to allow the model to capture multiple possible outputs and provide broader information for safe decision-making. This probabilistic output enables the autonomous vehicle to select navigation strategies that balance safety and efficiency under uncertainty.

Regarding claim 8, Gochev discloses wherein the further predicted output further comprises a predicted vehicle control strategy or predicted vehicle control commands (Para [0005], “controlling autonomous vehicle motion.”, Para [0019]-[0020], [0022], “determine a motion plan for controlling the motion of the autonomous vehicle.”, Para [0038]-[0040], [0061], [0085]-[0087], [0094]).

Regarding claim 9, Gochev discloses wherein the one or more reference labels comprise an associated reference label that corresponds to a ground truth vehicle control strategy or ground truth vehicle control commands that are determined during the past episode of locomotion of the vehicle or that is defined for the vehicle subsequent to the past episode of locomotion of the vehicle (Para [0036], “These labels can be utilized as ground-truth data to determine the accuracy and/or development of the vehicle action model as it is trained.”, Para [0078]).
Regarding claim 10, Gochev discloses wherein generating one or more of the losses comprises comparing the predicted vehicle control strategy or the predicted vehicle control commands to the ground truth vehicle control strategy or the ground truth vehicle control commands to generate one or more of the losses; (Para [0036], “the machine-learned vehicle action can be trained using supervised training techniques based on training data.”, “These labels can be utilized as ground-truth data to determine the accuracy and/or development of the vehicle action model as it is trained.”, Para [0078] and Fig. 2B, “The model trainer 254 can evaluate a training output 256 of the vehicle action model 138 to determine the accuracy and/or confidence level of the model as it is trained over time. The model trainer 254 can continue to train the vehicle action model 138 until a sufficient level of accuracy and/or confidence is achieved.”, __determining the accuracy and/or confidence level of the model as it is trained reads on generating losses__) and wherein updating the one or more additional ML model layers comprises backpropagating one or more of the losses across the one or more additional ML model layers. (Para [0115], “One example training technique is backwards propagation of errors.”) Regarding claim 11, Gochev discloses wherein each stream, of the plurality of streams, corresponds to a sequence of poses that represent the candidate navigation path, in the environment of the vehicle, for the vehicle or the object corresponding to a given one of the actors. 
(Para [0021]-[0030]) Regarding claim 12, Gochev discloses wherein each stream, of the plurality of streams, is at least one of: a target stream corresponding to the candidate navigation path the vehicle will follow, a joining stream that merges into the target stream, a crossing stream that is transverse to the target stream, an adjacent stream that is parallel to the target stream, or an additional stream that is one-hop from the joining stream, the crossing stream, or the adjacent stream. (Para [0021]- [0022], “The map data can provide information regarding: the identity and location of different roadways, road segments,”, Para [0054], “determine a vehicle route for the vehicle 104 based at least in part on the map data 120.”) Regarding claim 13, Gochev discloses The method of claim 1, wherein the object corresponding to each of the one or more actors is at least one of: an additional vehicle that is in addition to the vehicle, a bicyclist, or a pedestrian. (Para [0019], “presence of an object (e.g. pedestrian, vehicle, bicycle, or other object) that is within proximity of the autonomous vehicle.”) Regarding claim 14, The method of claim 13, wherein the object is dynamic in the environment of the vehicle along a particular stream of the plurality of streams. (Para [0020], “The object(s) can be static (e.g., not in motion) or dynamic (e.g., actors in motion).”). 
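The training arrangement mapped above for claims 9-10 (predicted per-stream, per-actor decision distributions compared against ground-truth labels from a logged episode, with the losses backpropagated through the additional ML layers) can be sketched as follows. This is a minimal NumPy illustration only, not code from either reference or the application; the shapes, the single linear "additional layer," and the cross-entropy loss are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: A actors, S streams, D discrete AV control decisions.
A, S, D = 3, 5, 3

# "Additional ML model layers": here, one linear layer mapping per-actor
# predicted output (features) to per-stream logits over decisions.
W = rng.normal(size=(16, S * D))
actor_features = rng.normal(size=(A, 16))
logits = (actor_features @ W).reshape(A, S, D)

# Softmax yields the claimed probability distribution over AV control
# decisions for each (actor, stream) pair.
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)

# Reference labels: the decision taken during the past episode of locomotion.
labels = rng.integers(0, D, size=(A, S))

# Generate one or more losses by comparing predictions to the labels...
picked = probs[np.arange(A)[:, None], np.arange(S)[None, :], labels]
loss = -np.log(picked).mean()

# ...and update the additional layers via the backpropagated gradient.
grad_logits = probs.copy()
grad_logits[np.arange(A)[:, None], np.arange(S)[None, :], labels] -= 1.0
grad_W = actor_features.T @ grad_logits.reshape(A, S * D) / (A * S)
W -= 0.1 * grad_W  # one gradient step
```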
Regarding claim 15, Gochev discloses wherein subsequently utilizing one or more of the additional ML model layers of one or more of the ML models in controlling the AV (e.g., Para [0032]-[0036]) comprises: processing, using the one or more ML model layers and the one or more additional ML model layers, sensor data generated by one or more sensors of the AV (Para [0023], “the perception system can process the sensor data from the sensor(s) to detect the one or more objects that are proximate to the autonomous vehicle as well as state data associated therewith.”, Para [0068], [0079]) to predict an AV control strategy or predict AV control commands; and causing the AV to be controlled based on the predicted AV control strategy or the predicted AV control commands. (e.g., Para [0034], [0056], “generate an appropriate motion plan through such surrounding environment. The autonomy computing system 114 can control the one or more vehicle control systems 116 to operate the vehicle 104 according to the motion plan.”)

Regarding claim 16, Gochev discloses the method of claim 15, further comprising: ranking a plurality of AV control strategies based on the processing, wherein the predicted AV control strategy is a highest ranked AV control strategy. (Para [0025]-[0026], “Once the optimization planner has identified the optimal motion plan (or some other iterative break occurs), the optimal motion plan (and the planned motion trajectory) can be selected and executed by the autonomous vehicle.”, Para [0034], [0038]).

Regarding claim 17, Gochev discloses the method of claim 1, wherein the one or more ML model layers comprise a first portion of a given one of the one or more ML models, and wherein the one or more additional ML model layers comprise a second portion of the given one of the one or more ML models.
(Para [0029], “the vehicle computing system can determine a respective vehicle action”, Para [0034], “The vehicle action model can be a machine-learned model”, Para [0036], [0078])

Regarding claim 18, Gochev discloses wherein the one or more ML model layers comprise a first one of the one or more ML models, and wherein the one or more additional ML model layers comprise at least a second one of the one or more ML models. (e.g., Para [0032], [0035], [0066])

Regarding claim 19, Gochev discloses a method for training one or more machine learning ("ML") models for use by an autonomous vehicle ("AV") (e.g., Abstract, “Systems and methods for controlling the motion of an autonomous”, Para [0006]-[0007], [0032], “the blocking model can include a machine-learned model”, Para [0034], “The vehicle action model can be a machine-learned model configured to determine a plurality of vehicle actions for the autonomous vehicle.”, Para [0041], [0089], Claim 7), the method comprising: obtaining a plurality of training instances from a past episode of locomotion of a vehicle (e.g., Para [0057], “obtain state data 130 that is indicative of one or more states (e.g., current and/or past state(s)) of one or more objects that are within a surrounding environment of the vehicle 104.”), each of the plurality of training instances comprising: training instance input, the training instance input comprising: predicted output generated using one or more ML model layers of one or more of the ML models (Para [0089], “The vehicle computing system 102 can input data indicative of the predicted motion trajectory 406 of the object 404 into the blocking model 136. The vehicle computing system can also input data indicative of the motion trajectory of the vehicle 104 into the blocking model 136.
” ), the predicted output being generated based on a plurality of actors and a plurality of streams (Para [0005], “The method includes determining, by the computing system, a motion plan for the autonomous vehicle based at least in part on the vehicle action sequence.”,Para [0019], “The autonomous vehicle can determine a vehicle action sequence based at least in part on the blocking information and other data (e.g., vehicle motion parameters, map data, object state data, etc.).”, Para [0033], [0057], “For example, the vehicle computing system 102 can process the sensor data 118, the map data 120, etc. to obtain state data 130.”) each of the plurality of actors corresponding to an object in an environment of the vehicle during the past episode (Para [0005], “The method includes obtaining,…, data associated with an object within a surrounding environment of an autonomous vehicle.”, Para [0023], “The state data can be indicative of one or more states (e.g., current or past state(s)) of one or more objects that are within the surrounding environment of the autonomous vehicle.” ), and each of the plurality of streams representing a candidate navigation path in the environment of the vehicle (Para [0027], “predicted motion trajectory”); and training instance output, the training instance output comprising: one or more associated reference labels for the past episode of locomotion, each of the one or more associated reference labels corresponding to an action performed by the vehicle during the past episode of locomotion; and training one or more additional ML layers of one or more of the ML models based on the plurality of training instances, wherein one or more of the additional ML model layers of one or more of the ML models are subsequently utilized in controlling the AV. (Para [0077]-[0078]). 
Gochev does not explicitly disclose the predicted output comprising a respective probability distribution over a plurality of AV control decisions for how to navigate a given stream, of the plurality of streams, based on each of the plurality of actors. However, Amini teaches the predicted output comprising a respective probability distribution over a plurality of AV control decisions for how the AV should navigate a given stream, of the plurality of streams, based on each of the plurality of actors (at least Para [0020], [0034], “one or more steering trajectories available to a vehicle pertain to an external road agent detected by an ego vehicle [] computation module 230 can, based at least in part on the parameters of the probability distribution, predict that the road agent will travel along a particular one of the one or more steering trajectories available to the road agent. The ego vehicle (vehicle 100) can then respond in accordance with the predicted trajectory of the external road agent, if necessary.”, [0039], “probabilistic control output 350 includes parameters for a probability distribution (e.g., the GMM parameters) for the trajectories 430a, 430b, and 430c. In some embodiments, computation module 230 can predict that the ego vehicle will travel along a particular one of the available steering trajectories based, at least in part, on the parameters of the probability distribution (e.g., the GMM parameters)”, [0041], __according to at least the cited paragraphs, Amini discloses predicting a trajectory/stream from among different trajectories that an ego vehicle (autonomous vehicle) should take by calculating the probability distribution for different possible trajectories that the ego vehicle can steer, based on the prediction of the trajectory that external road agents (reads on plurality of actors) take. Accordingly, at least the cited paragraphs of Amini meet the claimed limitations.__)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev, to include the step of determining/outputting a selection of a stream/trajectory from among a plurality of streams to navigate the vehicle based on the probability distribution of different trajectories and based on each of the plurality of actors, as taught by Amini, with a reasonable expectation of success, in order to allow the model to capture multiple possible outputs and provide broader information for safe decision-making. This probabilistic output enables the autonomous vehicle to select navigation strategies that balance safety and efficiency under uncertainty.

Regarding claim 20, Gochev discloses a system for training one or more machine learning ("ML") models for use by an autonomous vehicle ("AV"), the system comprising: at least one processor; and at least one memory storing instructions (Para [0006]) that, when executed, cause the at least one processor to: obtain a plurality of actors for a past episode of locomotion of a vehicle, each of the plurality of actors corresponding to an object in an environment of the vehicle during the past episode (Para [0116], “sets of data from previous events (e.g., driving log data associated with previously observed objects).”, Para [0023], “the state data can be indicative of one or more states (e.g., current or past state(s)) of one or more objects that are within the surrounding environment of the autonomous vehicle.”); obtain a plurality of streams in the environment of the vehicle during the past episode, each of the plurality of streams representing a candidate navigation path, for the vehicle or the object corresponding to a given one of the actors, in the environment of the vehicle (Para [0027]-[0028]); process, using one or more ML model layers of one or more of the ML models, the plurality of actors and the
plurality of streams to generate predicted output for each of the plurality of actors (Para [0029], “the vehicle computing system can determine a respective vehicle action for the autonomous vehicle at each of the respective time steps based at least in part on whether the object is blocking or not blocking the autonomous vehicle at the respective time step.”, Para [0030]); process, using one or more additional ML model layers of one or more of the ML models, the predicted output for each of the plurality of actors to generate further predicted output for each of the plurality of streams and with respect to each of the plurality of actors (Para [0019], “The autonomous vehicle can predict a trajectory by which the object is to travel over a certain time period”, “The autonomous vehicle can utilize the vehicle action sequence to plan its motion and autonomously navigate through its environment.”, Para [0022], [0024], “prediction system”, Para [0029]-[0034], “the predicted motion trajectory of the object”); generate, based on one or more reference labels for the past episode of locomotion and the further predicted output for each of the plurality of streams and with respect to each of the plurality of actors, one or more losses; and update, based on the one or more losses, one or more of the additional ML model layers of one or more of the ML models (Para [0036], “the machine-learned vehicle action can be trained using supervised training techniques based on training data.”, “These labels can be utilized as ground-truth data to determine the accuracy and/or development of the vehicle action model as it is trained.”, Para [0078] and Fig. 2B, “The model trainer 254 can evaluate a training output 256 of the vehicle action model 138 to determine the accuracy and/or confidence level of the model as it is trained over time. 
The model trainer 254 can continue to train the vehicle action model 138 until a sufficient level of accuracy and/or confidence is achieved.”), wherein one or more of the additional ML model layers of one or more of the ML models are subsequently utilized in controlling the AV (Para [0019], [0032]-[0038]).

Gochev does not explicitly teach wherein the further predicted output for each of the plurality of streams and with respect to each of the plurality of actors comprises a respective probability distribution over a plurality of AV control decisions for how the AV should navigate a given stream, of the plurality of streams, based on each of the plurality of actors.

However, Amini teaches wherein the further predicted output for each of the plurality of streams and with respect to each of the plurality of actors comprises a respective probability distribution over a plurality of AV control decisions for how the AV should navigate a given stream, of the plurality of streams, based on each of the plurality of actors ([0034], “one or more steering trajectories available to a vehicle pertain to an external road agent detected by an ego vehicle [] computation module 230 can, based at least in part on the parameters of the probability distribution, predict that the road agent will travel along a particular one of the one or more steering trajectories available to the road agent. The ego vehicle (vehicle 100) can then respond in accordance with the predicted trajectory of the external road agent, if necessary.”, __at least the cited paragraph of the reference reads on predicted output of the plurality of streams with respect to each of the plurality of actors as recited in the claim__, [0039], “probabilistic control output 350 includes parameters for a probability distribution (e.g., the GMM parameters) for the trajectories 430a, 430b, and 430c.
In some embodiments, computation module 230 can predict that the ego vehicle will travel along a particular one of the available steering trajectories based, at least in part, on the parameters of the probability distribution (e.g., the GMM parameters)”, [0041], __according to at least cited paragraphs, Amini discloses predicting a trajectory/ stream that an ego vehicle (autonomous vehicle) should take by calculating the probability of distribution for different trajectories that the ego vehicle can steer based on the prediction of the trajectory that the external road agent (reads on actors) takes which reads on the claimed limitations __ ). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev, to include the step of determining/outputting an a selection of an stream/trajectory from among plurality of streams to navigate the vehicle based on the probability distribution of different trajectories and based on each of the plurality of actors, as taught by Amini, with a reasonable expectation of success, in order to allow the model to capture multiple possible outputs and providing broader information for safe decision-making. This probabilistic output enables the autonomous vehicle to select navigation strategies that balance, safety and efficiency under uncertainty. Regarding claim 21, Gochev in view of Amini teaches the method of claim 1, however, Gochev doesn’t teach wherein the plurality of AV control decisions comprise two or more of: a yield decision indicative of whether the AV should yield to one or more of the plurality of actors while navigating the given stream, a merge decision indicative of whether the AV should merge into the given stream or a given additional stream, or an intersection entry decision indicative of whether the AV should enter an intersection along the given stream. 
However, Amini teaches wherein the plurality of AV control decisions comprise two or more of: a yield decision indicative of whether the AV should yield to one or more of the plurality of actors while navigating the given stream, a merge decision indicative of whether the AV should merge into the given stream or a given additional stream, or an intersection entry decision indicative of whether the AV should enter an intersection along the given stream ([0039], “control module 250 can control one or more vehicle systems 140 such as steering, braking, and/or acceleration in response to a predicted trajectory of the ego vehicle.”, [0049], “In some embodiments, control module 250 can also control various vehicle systems 140 as needed in response to a predicted trajectory of the ego vehicle or a predicted trajectory of an external road agent 530, as discussed above.”, __the various vehicle systems 140 include a braking system (paragraph [0030]); therefore, at least according to the excerpts of the cited paragraphs, Amini teaches yielding to one or more of the plurality of actors while navigating the given stream__, and also, according to at least paragraphs [0040]-[0041] and Fig. 4A, __Amini teaches an intersection entry decision__, [0049]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev, to include the step of generating predicted output for each of the actors and each of the streams (with respect to each actor's predicted output), by calculating a probability distribution over a plurality of AV control decisions for how the AV should navigate a given stream (including, for example, whether the AV should yield to an actor, or whether the AV should enter an intersection), as taught by Amini, with a reasonable expectation of success, with the motivation of allowing the model to capture multiple possible outputs and providing broader information for safe decision-making. Predicting the movement of actors on the road, predicting different possible scenarios of AV control in navigating a stream from a plurality of streams with respect to the predicted output for the plurality of actors in the environment of the vehicle, and making decisions about how to control the AV accordingly, enhances the reliability and safety of the AV control system. Claims 3-4 and 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Gochev, in view of Amini, further in view of Omari et al., US 2021/0200221 A1, hereinafter “Omari”.
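As characterized above, the combined teachings amount to a model head that, for each (stream, actor) pair, outputs a probability distribution over discrete AV control decisions such as yield, merge, or intersection entry. A minimal sketch of such an output head in plain Python, using made-up names and scores for illustration (this is not code from either cited reference):

```python
import math

# Hypothetical AV control decisions of the kind recited in claim 21.
DECISIONS = ("yield", "merge", "enter_intersection")

def softmax(logits):
    """Map raw scores to a probability distribution that sums to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decision_distributions(scores):
    """For each (stream, actor) pair, turn raw model scores into a
    probability distribution over the control decisions."""
    return {
        key: dict(zip(DECISIONS, softmax(logits)))
        for key, logits in scores.items()
    }

# Made-up scores for one stream and two actors (illustrative only).
raw_scores = {
    ("stream_0", "actor_a"): [2.0, 0.5, -1.0],
    ("stream_0", "actor_b"): [-0.5, 1.5, 0.0],
}
dists = decision_distributions(raw_scores)
```

A downstream planner could then pick, per stream, the decision with the highest probability, or weigh the full distribution to reason under uncertainty, consistent with the probabilistic output the rejection attributes to Amini.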
Regarding claim 3, Gochev discloses wherein the one or more reference labels comprise an associated reference label, for each of the plurality of disparate deciders, that corresponds to a ground truth decision that is determined during the past episode of locomotion of the vehicle or that is defined for the vehicle subsequent to the past episode of locomotion of the vehicle (Para [0036], “the machine-learned vehicle action can be trained using supervised training techniques based on training data.”, “These labels can be utilized as ground-truth data to determine the accuracy and/or development of the vehicle action model as it is trained.”, Para [0068]-[0071]). However, Gochev doesn't explicitly disclose that the one or more reference labels correspond to a ground truth probability for a decision that is determined… Nevertheless, Omari teaches the one or more reference labels comprise an associated reference label, for each of the plurality of disparate deciders, that corresponds to a ground truth probability for a decision that is determined during the past episode of locomotion of the vehicle or that is defined for the vehicle subsequent to the past episode of locomotion of the vehicle (Para [0025]-[0027], [0032], “ground-truth probability distribution.”, “generate data corresponding to an inferred ground truth. In the context of training of prioritization model 505, the ground truth serves as the target or desired output for the associated training sample and may be referred to as the ground truth label for that training sample.”, “Based on the prerecorded sensor data indicating the locations of all the agents at time t.sub.1, the ground-truth for determining the accuracy of the prediction may be generated.”, [0033], Figs. 3-5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev in view of Amini, to include the ground truth probability distribution as taught by Omari, with a reasonable expectation of success, in order to represent real-world uncertainty in the machine learning model, which is used to approximate the true, real-world probability distribution of outcomes. Regarding claim 4, Gochev discloses wherein generating one or more of the losses comprises comparing the associated predicted decision made by each of the plurality of disparate deciders to the ground truth decision, for each of the plurality of deciders, to generate one or more of the losses; and wherein updating the one or more additional ML model layers comprises backpropagating one or more of the losses across the one or more additional ML model layers (Para [0036], “the machine-learned vehicle action can be trained using supervised training techniques based on training data.”, “These labels can be utilized as ground-truth data to determine the accuracy and/or development of the vehicle action model as it is trained.”, Para [0078] and Fig. 2B, “The model trainer 254 can evaluate a training output 256 of the vehicle action model 138 to determine the accuracy and/or confidence level of the model as it is trained over time. The model trainer 254 can continue to train the vehicle action model 138 until a sufficient level of accuracy and/or confidence is achieved.”, __determining the accuracy and/or confidence level of the model as it is trained reads on generating losses__). However, Gochev doesn't explicitly disclose that generating one or more of the losses comprises comparing one or more of the associated probabilities for the predicted decision made by each of the plurality of disparate deciders to one or more of the ground truth probabilities.
Nevertheless, Omari teaches that generating one or more of the losses comprises comparing one or more of the associated predicted probabilities for the predicted decision made by each of the plurality of disparate deciders to one or more ground truth probabilities for the decision, for each of the plurality of deciders, to generate one or more of the losses (at least [0022], [0025], “ground-truth for determining the accuracy of the prediction may be generated.”, [0026]-[0027], [0032], “compare the predicted probability distribution generated by prioritization model 505 with a ground-truth probability distribution”, [0034], “the predicted contextual representation may then be compared to the known second contextual representation (i.e., the ground-truth at time t.sub.1). The comparison may be quantified by a loss value or computed using a loss function.”, [0046]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev in view of Amini, to include generating the losses by comparing the predicted probability distribution to the ground truth probability distribution as taught by Omari, with a reasonable expectation of success, in order to evaluate/measure the accuracy of the model in capturing uncertainty and making better probabilistic predictions. Regarding claim 6, Gochev discloses wherein the one or more reference labels comprise an associated reference label, for each of the plurality of disparate deciders, that corresponds to a ground truth that is determined during the past episode of locomotion of the vehicle or that is defined for the vehicle subsequent to the past episode of locomotion of the vehicle (See rejection of claim 3).
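The training objective the rejection attributes to Omari, comparing a predicted probability distribution against a ground-truth probability distribution to produce a loss, is commonly implemented with a KL-divergence (or cross-entropy) loss. A minimal illustrative sketch, assuming three decisions and made-up probabilities (this is not code from any cited reference):

```python
import math

def kl_divergence(ground_truth, predicted, eps=1e-12):
    """KL(ground_truth || predicted): one common way to score how far a
    predicted probability distribution is from a ground-truth one."""
    return sum(
        g * math.log((g + eps) / (p + eps))
        for g, p in zip(ground_truth, predicted)
        if g > 0.0  # terms with zero ground-truth mass contribute nothing
    )

# Made-up per-decider distributions over three AV control decisions.
ground_truth = [0.7, 0.2, 0.1]  # ground-truth probabilities (labels)
predicted = [0.5, 0.3, 0.2]     # model's predicted distribution

loss = kl_divergence(ground_truth, predicted)
```

A perfectly matching prediction yields zero loss; during training, this scalar would be backpropagated through the additional ML model layers, as the cited passages describe.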
Gochev in view of Amini doesn't disclose one or more reference labels comprise an associated reference label, for each of the plurality of disparate deciders, that corresponds to a ground truth probability distribution. However, Omari teaches one or more reference labels comprise an associated reference label, for each of the plurality of disparate deciders, that corresponds to a ground truth probability distribution (at least Para [0020], [0032], “In the context of training of prioritization model 505, the ground truth serves as the target or desired output for the associated training sample and may be referred to as the ground truth label for that training sample.”, “compare the predicted probability distribution generated by prioritization model 505 with a ground-truth probability distribution.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev in view of Amini, to include the ground truth probability distribution as taught by Omari, with a reasonable expectation of success, in order to represent real-world uncertainty in the machine learning model, which is used to approximate the true, real-world probability distribution of outcomes. Regarding claim 7, Gochev discloses generating one or more of the losses for each of the plurality of deciders (Para [0036], “the machine-learned vehicle action can be trained using supervised training techniques based on training data.”, “These labels can be utilized as ground-truth data to determine the accuracy and/or development of the vehicle action model as it is trained.”, Para [0078] and Fig. 2B, “The model trainer 254 can evaluate a training output 256 of the vehicle action model 138 to determine the accuracy and/or confidence level of the model as it is trained over time. The model trainer 254 can continue to train the vehicle action model 138 until a sufficient level of accuracy and/or confidence is achieved.”), and wherein updating the one or more additional ML model layers comprises backpropagating one or more of the losses across the one or more additional ML model layers (Para [0115]). Gochev doesn't explicitly disclose that generating one or more of the losses comprises comparing the associated predicted probability distribution to the ground truth probability distribution, for each of the plurality of deciders, to generate one or more of the losses. However, Omari teaches generating one or more of the losses comprises comparing the associated predicted probability distribution to the ground truth probability distribution, for each of the plurality of deciders, to generate one or more of the losses ([0025], “ground-truth for determining the accuracy of the prediction may be generated.”, [0026]-[0027], [0032], “compare the predicted probability distribution generated by prioritization model 505 with a ground-truth probability distribution”, [0034], “the predicted contextual representation may then be compared to the known second contextual representation (i.e., the ground-truth at time t.sub.1). The comparison may be quantified by a loss value or computed using a loss function.”, [0046]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the machine learning model used in the system and method as taught by Gochev in view of Amini, to include generating the losses by comparing the predicted probability distribution to the ground truth probability distribution as taught by Omari, with a reasonable expectation of success, in order to evaluate/measure the accuracy of the model in capturing uncertainty and making better probabilistic predictions. Conclusion THIS ACTION IS MADE FINAL.
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAJAR HASSANIARDEKANI whose telephone number is (571) 272-1448. The examiner can normally be reached Monday through Friday, 8 am-5 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin Piateski, can be reached at 571-270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /H.H./Examiner, Art Unit 3669 /Erin M Piateski/Supervisory Patent Examiner, Art Unit 3669

Prosecution Timeline

Jun 22, 2023
Application Filed
Apr 23, 2025
Non-Final Rejection — §103
Jul 21, 2025
Examiner Interview Summary
Jul 21, 2025
Applicant Interview (Telephonic)
Jul 23, 2025
Response Filed
Sep 11, 2025
Non-Final Rejection — §103
Nov 26, 2025
Applicant Interview (Telephonic)
Nov 26, 2025
Examiner Interview Summary
Dec 08, 2025
Response Filed
Feb 13, 2026
Final Rejection — §103
Apr 09, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584295
Work Machine
2y 5m to grant Granted Mar 24, 2026
Patent 12498714
SYSTEMS AND METHODS FOR UAV FLIGHT CONTROL
2y 5m to grant Granted Dec 16, 2025
Patent 12391273
METHOD AND COMPUTER SYSTEM FOR CONTROLLING THE MOVEMENT OF A HOST VEHICLE
2y 5m to grant Granted Aug 19, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

4-5
Expected OA Rounds
88%
Grant Probability
62%
With Interview (-25.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
