DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/04/2026 has been entered.
Status of Claims
Claims 1-10, 13-14, 17-23, 26-27, and 29-37 filed on 01/06/2026 are presently examined. Claims 11-12, 15-16, 24-25, and 28 are cancelled. Claims 1, 14, and 20 are amended.
Response to Arguments
Regarding 35 USC 101, Applicant's amendments filed 09/11/2025 to the independent claims result in the withdrawal of the 35 USC 101 rejection.
Regarding 35 USC 102, Applicant’s amendments result in the withdrawal of the 102 rejection, and its replacement with a 103 rejection.
Regarding 35 USC 103, Applicant's arguments filed 01/06/2026 have been fully considered. Some arguments are unpersuasive, while others are moot. See details below:
Applicant argues Ramamoorthy does not disclose a reactivity model assigned to each agent vehicle used to determine the reactions of the agent vehicles. Examiner respectfully disagrees. The claim does not specify that the reactivity models are unique between agent vehicles; applying one reaction prediction to all agent vehicles accomplishes the concept of assigning a reactivity model to each agent vehicle. Further, Ramamoorthy discloses a plurality of possible goals for each external actor and bases the anticipated behavior on probabilistic or deterministic goal recognition [0036], which constitutes a reactivity model. As an Examiner’s note: new reference Gall teaches unique behavior profiles for agent vehicles (ranging from at least passive to aggressive), which would accomplish unique reactivity models, but Gall is not relied upon to teach this limitation in this Action. This note is provided for Applicant’s consideration should further amendment be contemplated after this Action.
Applicant argues Ramamoorthy does not teach projecting the paths of the agent vehicles onto the path of the ego vehicle and modulating each step of the vehicle and agent vehicles using the projected paths. Examiner respectfully disagrees. Ramamoorthy projects the predicted trajectories of the agent vehicles onto the path of the ego vehicle and modulates each time step of both the ego vehicle and the agent vehicles in the simulation of behaviors.
Applicant argues Ramamoorthy does not teach determining interactions of the agent vehicles with the ego vehicle based on the reaction of the ego vehicle and the agent vehicles. Examiner agrees; however, the argument is moot, as new reference Gall teaches the new limitation:
determine interactions between a subset of the one or more agent vehicles with the vehicle based on a reaction of the vehicle and the one or more reactions of the one or more agent vehicles.
Gall teaches an agent behavior prediction system that determines ego vehicle actions based on the current and predicted actions of the ego vehicle, together with the subsequently predicted and updated reactions of the agent vehicles and the ego vehicle.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-10, 13-14, 17-23, 26-27, and 29-37 are rejected under 35 U.S.C. 103 as being unpatentable over Ramamoorthy et al. (US 20210370980 A1) in view of Gall et al. (US 20240051581 A1), hereinafter referred to as Ramamoorthy and Gall, respectively.
Regarding claims 1 and 20, Ramamoorthy discloses An apparatus for decision making for operation of at least one vehicle ([0001] “disclosure relates to autonomous vehicle (AV) manoeuvre planning.”), the apparatus comprising:
at least one memory ([0085] “electronic storage of an autonomous vehicle”); and
at least one processor coupled to the at least one memory ([0439] “one or more processors”) and configured to:
determine a plurality of prediction trajectories relative to a path of a vehicle within a simulated scene over time at time steps ([0217] “hypothesises different possible goals for each external agent and then generates trajectories of how the agent might achieve each goal, and the likelihood that it would follow each trajectory.” [0227] “simulates future behaviour of the other vehicle … the expected trajectory model indicates how likely it is that the other vehicle will take a particular path or paths (trajectories) within a given time period Δt” [0245] “The above steps are performed repeatedly over time,”), wherein
each prediction trajectory of the plurality of prediction trajectories comprises at least one location of an agent vehicle of one or more agent vehicles that change as a function of time ([0127] “the data processing component A2 provides a comprehensive representation of the ego vehicle's surrounding environment, the current state of any external actors within that environment (location, heading, speed etc. to the extent they are detectable)” the trajectory prediction would begin at the current location of the external actor. [0244] “the path distribution at time t+Δt … can itself be determined by e.g. re-executing the inverse planner A24 at time t+Δt for goal G.sub.i as above, but with a new determined location of the external agent r.sub.t+Δt at time t+Δt, in order to update the expected trajectory model accordingly.”);
determine one or more reactions of the one or more agent vehicles based on at least one future position of the vehicle and each respective reactivity model, associated with a respective traffic context, assigned to each agent vehicle ([0132] “A function of the prediction component A4 is to model predicted external agent behaviours to be run as part of the simulations. That is, to execute an external agent behaviour model for predicting the behaviour of any external actors in the encountered driving scenario so that the predicted behaviour can be incorporated into the simulations on which manoeuvre planning is based.” [0036] “determining a set of available goals for the external actor in the encountered driving scenario, and applying probabilistic or deterministic goal recognition in respect of the set of available goals, in order to simulate the anticipated behaviour.”);
project paths of the subset of the one or more agent vehicles onto the path of the vehicle to generate projected paths of the one or more agent vehicles; store, in the at least one memory, the projected paths of the one or more agent vehicles ([0078] “The goal recognition may be probabilistic, and a goal distribution may be determined by comparing the best-available trajectory model with the optimal trajectory model for each goal.” [0212] “To assist the AV planner A6 in making AV planning decisions, such as determining a sequence of manoeuvres, actions etc. to be taken by the ego vehicle to execute a defined goal safely and effectively, the inverse planner A24 predicts the movement of nearby external actors, which may be referred to as agents in the present context.”);
modulate each step of the vehicle and the one or more agent vehicles using the stored projected paths of the one or more agent vehicles (According to Applicant’s specification [0070] “each scene index” is every timestep. [0147] “simulation is run based on the following: the extracted driving scenario description parameters; the parent state (this may for example be used as a starting state for the simulation); the corresponding manoeuvre between the parent and child nodes (the performance of which is simulated between time t and t+Δt); and simulated external agent behaviour between time t and t+Δt (as modelled by the prediction component A4).” [0148] “The performance of the corresponding manoeuvre by the ego vehicle in the interval ΔT is simulated by simulating or “rolling out” actions that the AV planner A6 would take in real-life given the state of the driving scenario represented by the parent node, the external agent behaviour in that time interval Δt, and the manoeuvre to be performed.”);
update the simulated scene to produce a forward simulated scene based on modulation of each step of the vehicle and the one or more agent vehicles, wherein the forward simulated scene is represented as a node within a tree for a tree search, and wherein the plurality of prediction trajectories for the forward simulated scene are encoded within the node within the tree ([0004] “updating the driving scenario state of its parent node based on (i) a candidate AV manoeuvre and (ii) an anticipated behaviour of at least one external agent (actor) in the encountered driving scenario.” [0009] “the game tree has a plurality of nodes representing anticipated states of the encountered driving scenario, and the anticipated driving scenario state of each child node is determined by updating the driving scenario state of its parent node based on (i) a candidate AV manoeuvre and (ii) an anticipated behaviour of at least one external agent (actor) in the encountered driving scenario.”);
determine one or more prediction paths for the vehicle based on the forward simulated scene ([0003] “reason about the possible effect of different sequences of manoeuvres in a driving scenario it has encountered, taking into account the anticipated behaviour of other vehicles/agents, so that it may determine a suitable sequence of manoeuvres (ego vehicle maneuvers) to be executed in that scenario.”); and
output a control signal to cause the vehicle to execute a navigation action according to the one or more prediction paths ([0008] “generating AV control signals for executing the determined sequence of AV manoeuvres”).
Ramamoorthy fails to explicitly disclose determine interactions between a subset of the one or more agent vehicles with the vehicle based on a reaction of the vehicle and the one or more reactions of the one or more agent vehicles. Ramamoorthy, as shown above, predicts reactions of agent vehicles and the current anticipated action of the ego vehicle, but does not explicitly disclose determining interactions of the agent vehicles and the ego vehicle based on those reactions.
However, Gall teaches determine interactions between a subset of the one or more agent vehicles with the vehicle based on a reaction of the vehicle and the one or more reactions of the one or more agent vehicles ([0117] “updating, using the at least one processor, based on the update of the agent prediction, the predicted agent trajectory. Accordingly, a subsequent determination of the action for the AV can be based on the updated agent prediction and the updated predicted agent trajectory.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ramamoorthy with Gall’s teaching of updating interactions and reactions between agent vehicles and the ego vehicle. One would be motivated, with reasonable expectation of success, to use updated interactions and reactions in order to determine a subsequent action for the ego vehicle ([0117] “a subsequent determination of the action for the AV can be based on the updated agent prediction and the updated predicted agent trajectory”).
Regarding claims 2 and 21, Ramamoorthy discloses The apparatus of claim 1, wherein each prediction trajectory of the plurality of prediction trajectories is represented by one dimension or two dimensions ([FIG 5A] both one and two dimensions are shown for expected trajectories of external actors. [0243] “A trajectory can be a simple spatial path but the description applies equally to trajectories that incorporate motion information (such speed/velocity information, acceleration).” “Dimension” in this context is interpreted to mean that the trajectory is represented in one or two dimensions.).
Regarding claims 3 and 31, Ramamoorthy discloses The apparatus of claim 2, wherein the one dimension is longitudinal distance ([0264] “A “target” actor means an external actor whose behaviour is being predicted. Predictions may be made for one or multiple target vehicles (or other actors) … they take sensor feedback into account and automatically vary their speed and distance to a leading vehicle”).
Regarding claims 4 and 32, Ramamoorthy discloses The apparatus of claim 2, wherein the two dimensions are longitudinal distance and lateral distance ([0264] “… automatically vary their speed and distance to a leading vehicle” [0334] “The goal location … could correspond to a particular distance along the road.” [FIG 5A] both one and two (longitudinal and lateral) dimensions are shown for expected trajectories of external actors.).
Regarding claims 5 and 22, Ramamoorthy discloses The apparatus of claim 1, wherein each prediction trajectory of the plurality of prediction trajectories is represented within the simulated scene by one or more symbols ([FIGs 5A and 5B] the paths have a graphical representation within the simulated scene.).
Regarding claims 6 and 33, Ramamoorthy discloses The apparatus of claim 5, wherein the one or more symbols are each one of a circle, a square, a rectangle, a triangle, or a polygon ([FIG. 5C] path includes square shape.).
Regarding claims 7 and 34, Ramamoorthy discloses The apparatus of claim 1, wherein the path is a localized corridor ([FIGs 10 and 14A-14C] these are each examples of localized corridors, consistent with Applicant’s specification description of a localized corridor [0064] “a path (e.g., a localized corridor, such as the straight path of graph 350 of FIG. 3)”).
Regarding claims 8 and 35, Ramamoorthy discloses The apparatus of claim 1, wherein a traffic context is based on at least one of leader-follower relationships in traffic or vehicles-within-a-set-of-adjacent-lanes in the traffic ([at least 0264] “Predictions may be made for one or multiple target vehicles … they take sensor feedback into account and automatically vary their speed and distance to a leading vehicle”).
Regarding claims 9 and 23, Ramamoorthy discloses The apparatus of claim 1, wherein the one or more reactions of the one or more agent vehicles comprises at least one of one or more longitudinal reactions or one or more lateral reactions ([at least 0264] “Predictions may be made for one or multiple target vehicles … they take sensor feedback into account and automatically vary their speed and distance to a leading vehicle” which is at least a longitudinal reaction.).
Regarding claims 10 and 36, Ramamoorthy discloses The apparatus of claim 9, wherein the one or more longitudinal reactions comprises at least one of accelerating or decelerating ([at least 0264] “Predictions may be made for one or multiple target vehicles … they take sensor feedback into account and automatically vary their speed and distance to a leading vehicle” Varying speed would require accelerating or decelerating.).
Regarding claims 13 and 26, Ramamoorthy discloses The apparatus of claim 1, wherein the at least one processor is configured to modulate each step of the vehicle and the one or more agent vehicles at each scene index (According to Applicant’s specification [0070] “each scene index” is every timestep. Ramamoorthy: [0148] “The performance of the corresponding manoeuvre by the ego vehicle in the interval ΔT is simulated by simulating or “rolling out” actions that the AV planner A6 would take in real-life given the state of the driving scenario represented by the parent node, the external agent behaviour in that time interval Δt, and the manoeuvre to be performed.”).
Regarding claims 14 and 27, Ramamoorthy discloses The apparatus of claim 1, wherein the at least one processor is configured to perform branching of the forward simulated scene using the tree search ([0009] “the game tree has a plurality of nodes representing anticipated states of the encountered driving scenario, and the anticipated driving scenario state of each child node is determined by updating the driving scenario state of its parent node based on (i) a candidate AV manoeuvre and (ii) an anticipated behaviour of at least one external agent (actor) in the encountered driving scenario.” [0398] “A collision checker is applied to check whether the ego vehicle collides with any of the other vehicles during the forward simulation. If there is a collision, that branch in the search tree is immediately “cut” (i.e. no longer explored).”).
Regarding claims 17 and 37, Ramamoorthy discloses The apparatus of claim 14, wherein the tree search is a Monte Carlo Tree Search (MCTS) ([0025] “The tree search algorithm may be a Monte Carlo Tree Search (MCTS) algorithm.”).
Regarding claims 18 and 29, Ramamoorthy discloses The apparatus of claim 1, wherein the at least one processor is configured to assign costs to each respective distance between the vehicle and the one or more agent vehicles ([0399] “the cost function is applied to the trajectory generated by the maneuvers” [0270] “maneuver policies are defined in a way that encompasses low-level planning such as velocity and distance.” [0278] “a lane change left is only possible if there is a lane to the left of the car, and if there is sufficient open space on that lane for the car.” [0418] “if Car 2 is close to ego car (as in picture), then the ego car may decide to slow down and keep a distance to Car 2.”).
Regarding claims 19 and 30, Ramamoorthy discloses The apparatus of claim 1, wherein each prediction trajectory of the plurality of prediction trajectories further comprises at least one position of the vehicle ([0127] “the data processing component A2 provides a comprehensive representation of the ego vehicle's surrounding environment, the current state of any external actors within that environment (location, heading, speed etc. to the extent they are detectable)” the trajectory prediction would begin at the current location of the external actor. [0161] “if the vehicle is at a particular location relative to a T-junction (corresponding to the parent state), there may be three possible manoeuvres, to stop, turn left, and turn right, but continuing straight would not be an option.” Takes into account the location of the vehicle.).
Regarding claim 28, Ramamoorthy discloses The method of claim 27, wherein the forward simulated scene is represented as a node within a tree for the tree search, and wherein the plurality of prediction trajectories for the forward simulated scene are encoded within the node within the tree ([0009] “the game tree has a plurality of nodes representing anticipated states of the encountered driving scenario, and the anticipated driving scenario state of each child node is determined by updating the driving scenario state of its parent node based on (i) a candidate AV manoeuvre and (ii) an anticipated behaviour of at least one external agent (actor) in the encountered driving scenario.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK R HEIM whose telephone number is (571)270-0120. The examiner can normally be reached M-F 9-6 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fadey Jabr can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.R.H./Examiner, Art Unit 3668
/Fadey S. Jabr/Supervisory Patent Examiner, Art Unit 3668