Prosecution Insights
Last updated: April 19, 2026
Application No. 18/071,881

SYSTEM AND METHOD FOR RISK-BIASED TRAJECTORY FORECASTING

Non-Final OA: §101, §102, §103, §112
Filed: Nov 30, 2022
Examiner: NYE, LOUIS CHRISTOPHER
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyota Research Institute, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 22% (At Risk)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 58%

Examiner Intelligence

Career Allow Rate: 22% (2 granted / 9 resolved; -32.8% vs TC avg). Grants only 22% of cases.
Interview Lift: strong, +35.7% across resolved cases with vs. without an interview.
Typical Timeline: 3y 2m average prosecution; 27 applications currently pending.
Career History: 36 total applications across all art units.

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§112: 3.9% (-36.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 9 resolved cases.
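The headline percentages above follow directly from the raw counts; the sketch below reproduces the arithmetic. The Tech Center average used here is back-solved from the stated -32.8% delta (an assumption for illustration, not independent data):

```python
# Illustrative arithmetic behind the dashboard figures above.
granted, resolved = 2, 9
allow_rate = granted / resolved            # career allow rate
tc_avg = 0.55                              # assumed: implied by 22.2% + 32.8%
delta_vs_tc = allow_rate - tc_avg

print(f"Career allow rate: {allow_rate:.1%}")   # ~22.2%
print(f"Delta vs TC avg:  {delta_vs_tc:+.1%}")  # ~-32.8%
```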

Office Action

Rejections under §101, §102, §103, and §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

“a risk-neutral latent space sampling module to sample a risk-neutral latent space…”, “a risk-neutral trajectory prediction module to predict… risk-neutral future surrounding agent trajectories…”, “a risk-biased latent space sampling module to sample a risk-biased latent space…”, and “a risk-biased trajectory prediction module to predict… risk-biased future surrounding agent trajectories…” in claim 17; “a controller module to perform a vehicle control action...” in claim 18; and “a planner to selects a reduced amount of prediction samples…” in claim 19. This interpretation applies to all claims depending therefrom.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 17-19 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification (see [0077-0085] and Table 1) does not disclose sufficient corresponding structure for the claimed functions of sampling risk-neutral and risk-biased latent spaces and predicting risk-neutral and risk-biased future surrounding agent trajectories (see MPEP 2181(IV)). The specification also does not disclose sufficient corresponding structure for performing a vehicle control action and selecting a reduced amount of prediction samples (see MPEP 2181(IV)). Thus, a person of ordinary skill in the art cannot determine how to perform the claimed functions, and the specification fails to demonstrate that the inventor was in possession of the claimed invention at the time of filing. Claim 20 incorporates by reference all the limitations of claim 17 and is rejected under 35 U.S.C. 112(a) for similar reasons.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The claim limitations “a risk-neutral latent space sampling module to sample a risk-neutral latent space…”, “a risk-neutral trajectory prediction module to predict… risk-neutral future surrounding agent trajectories…”, “a risk-biased latent space sampling module to sample a risk-biased latent space…”, and “a risk-biased trajectory prediction module to predict… risk-biased future surrounding agent trajectories…” in claim 17, “a controller module to perform a vehicle control action…” in claim 18, and “a planner to selects a reduced number of prediction samples…” in claim 19 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed functions and to clearly link the structure, material, or acts to the functions. No association between the structure and the functions can be found in the specification (see [0077-0085] and Table 1). The specification fails to clearly link the claimed functions to disclosed structures, materials, or acts (see MPEP 2181(III)). Therefore, these claims are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Claim 20 incorporates by reference all the limitations of claim 17 and is rejected under 35 U.S.C. 112(b) for similar reasons.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea without significantly more.

Regarding claims 1-20, Step 1: The preamble of claims 1-8 recites a method, which falls within the statutory category of a process. The preamble of claims 9-16 recites a non-transitory computer readable medium, which falls within the statutory category of a manufacture. The preamble of claims 17-20 recites a system, which falls within the statutory category of an apparatus.
Regarding claim 1, Step 2A – Prong One: Claim 1 recites:

A method of forecasting risk-biased trajectories of agents surrounding an ego vehicle, the method comprising: sampling a risk-neutral latent space generated by a trained encoder of a generative network based on past surrounding agent trajectories; predicting, based on the sampling of the risk-neutral latent space, risk-neutral future surrounding agent trajectories using a trained decoder of the generative network; sampling a risk-biased latent space distribution generated by a trained, risk-aware encoder of the generative network based on past trajectories of the ego vehicle and a risk-sensitivity; and predicting, based on the sampling of the risk-biased latent space distribution, risk-biased future surrounding agent trajectories using the trained decoder of the generative network.

Under the broadest reasonable interpretation, the bolded limitations above are directed to a mental process able to be performed in the human mind. A human could use observation, evaluation, and judgment to neutrally predict the path or trajectory of surrounding agents of an ego vehicle and to make a risk-biased prediction of the path or trajectory of surrounding agents of an ego vehicle. Step 2A – Prong One (Yes).

Step 2A – Prong Two: The additional elements of the claim regarding “sampling a risk-neutral latent space generated by a trained encoder of a generative network based on past surrounding agent trajectories;” and “sampling a risk-biased latent space distribution generated by a trained, risk-aware encoder of the generative network based on past trajectories of the ego vehicle and a risk-sensitivity;” are insignificant extra-solution activities that amount to no more than mere data gathering (see MPEP 2106.05(g)). The additional elements of the claim regarding “using a trained decoder of the generative network” are mere instructions to apply the judicial exception on a generic computer (see MPEP 2106.05(f)). Even when viewed in combination, the additional elements do not integrate the judicial exception into a practical application. Step 2A – Prong Two (No).

Step 2B: As explained with respect to Step 2A, the additional elements of the claim regarding “sampling a risk-neutral latent space generated by a trained encoder of a generative network based on past surrounding agent trajectories;” and “sampling a risk-biased latent space distribution generated by a trained, risk-aware encoder of the generative network based on past trajectories of the ego vehicle and a risk-sensitivity;” are insignificant extra-solution activities that amount to no more than mere data gathering (see MPEP 2106.05(g)). Data gathering is well-understood, routine, conventional activity as recognized by the courts (see MPEP 2106.05(d)(II)). The additional elements of the claim regarding “using a trained decoder of the generative network” are mere instructions to apply the judicial exception on a generic computer (see MPEP 2106.05(f)). The computer is recited at a high level of generality and imposes no meaningful limitations on the claim. Even when viewed in combination, the additional elements do not amount to significantly more than the judicial exception. Step 2B (No).

Regarding claims 9 and 17: These claims are similar in scope to claim 1 and are rejected under similar rationale as above. The processors and memory recited in these claims are also generic computing components. Claims 9 and 17 are ineligible.

Dependent claims:

Claims 2-3, 7-8, 10-11, 15-16, and 19-20: These claims recite additional elements that amount to further instructions to apply the judicial exception on a generic computer (see MPEP 2106.05(f)) and insignificant extra-solution activities that are mere data gathering (see MPEP 2106.05(g)).
These additional elements, even when viewed in combination, do not integrate the judicial exception into a practical application and do not amount to significantly more than the judicial exception; these claims are thus ineligible.

Claims 4, 12, and 18: These claims recite further abstract ideas (mental processes) and thus are ineligible.

Claims 5-6 and 13-14: These claims recite further abstract ideas directed to mathematical concepts. Regarding claim 5, the limitations of “computing an expected cost associated with a predicted agent future trajectory and a predicted ego vehicle future trajectory; and computing a risk loss based on the computed expected cost; and computing a risk-biasing loss estimation based on the computed risk loss” are all mathematical calculations and thus fall within the mathematical concepts grouping of abstract ideas. Regarding claim 6, the limitation of “in which computing the expected cost further comprises overestimating a probability of human motion when an expected cost of human motion exceeds a predetermined value according to a current motion plan of the ego vehicle” is a mathematical calculation of probability and thus falls within the mathematical concepts grouping of abstract ideas. Claims 5 and 6 do not contain any additional elements that integrate the abstract idea into a practical application or that would amount to significantly more than the judicial exception. Claims 13-14 are rejected for similar reasons. These claims are ineligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2, 4, 8-10, 12, 16-18, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Chen et al. (US Pub. No. 2023/0391374, hereinafter “Chen”).

Regarding claim 1, Chen teaches a method of forecasting risk-biased trajectories of agents surrounding an ego vehicle, the method comprising:

sampling a risk-neutral latent space generated by a trained encoder of a generative network based on past surrounding agent trajectories (Chen, [0062] – “The trajectory prediction model may comprise an encoder that may obtain encoded state and edge history as well as an encoded local map, and generate a discrete Gibbs distribution over the clique latent variable.”, [0075] – “the trajectory prediction model comprises the encoder 102. The encoder 102, also referred to as an encoder neural network, model, and/or variations thereof, may be one or more neural networks that process data to calculate a representation of the data. The encoder 102 may be implemented in any suitable manner, such as through one or more data structures that encode a structure, configuration, and/or other information of the encoder 102.
In an embodiment, the encoder 102 is a software program, application, system, or module that is part of or otherwise associated with the trajectory prediction model.”, [0076] – “The encoder 102 may obtain clique node history 104, which may be a set of data that indicates history of nodes (e.g., history of the states of the nodes) in a particular clique, in which the history may be in reference to one or more particular past time intervals or periods. In some embodiments, the encoder 102 calculates or otherwise obtains clique node history 104 in connection with the history of the nodes as described herein”, and [0088] – “The decoder 124 may perform discrete latent sampling 126, which may refer to one or more processes of selecting one or more values from the Gibbs distribution 120. In some examples, for the particular clique, the decoder 124 performs discrete latent sampling 126 by at least selecting or otherwise sampling a set of latent variables (e.g., z) corresponding to a set of agents of the particular clique from the Gibbs distribution 120.” – teaches sampling a risk-neutral latent space generated by a trained encoder (decoder 124 performs discrete latent sampling of the Gibbs distribution generated by encoder 102) of a generative network (trajectory prediction model) based on past surrounding agent trajectories (clique node history));

predicting, based on the sampling of the risk-neutral latent space, risk-neutral future surrounding agent trajectories using a trained decoder of the generative network (Chen, [0092] – “In an embodiment, for each agent of the particular clique, the decoder 124 outputs one or more trajectories, in which each trajectory corresponds to a respective particular combination of modes of agents in the particular clique. In some examples, the decoder 124 outputs one or more probability values associated with the one or more trajectories (e.g., obtained in connection with the factor graph 122 and/or the Gibbs distribution 120).” – teaches predicting risk-neutral future surrounding agent trajectories using a trained decoder of the generative network (decoder 124 generates trajectories for agents in a clique based on the sampling of the Gibbs distribution or factor graph generated by the encoder));

sampling a risk-biased latent space distribution generated by a trained, risk-aware encoder of the generative network based on past trajectories of the ego vehicle and a risk-sensitivity (Chen, [0096] – “In an embodiment, the trajectory prediction model is trained in connection with one or more conditional value at risk (CVaR) based loss functions, in which CVaR is defined through the following formula, although any variations thereof can be utilized: Eq. (5) in which P is the probability distribution of X and α tunes the level of risk-averseness. In an embodiment, CVaR is the mean of the lowest α-percentile values of x under P. … The trajectory prediction model may utilize CVaR to focus on the best predictions to maintain output diversity. During training, α may be used to trade-off the model's focus on encoder accuracy vs diversity. In addition to incorporating CVaR, the trajectory prediction model may utilize a greedy algorithm to diversely sample the product latent space.” – teaches sampling a risk-biased latent space distribution generated by a trained, risk-aware encoder of the generative network (the trajectory prediction model, including the encoder, is trained in connection with one or more CVaR loss functions, and in addition to incorporating CVaR the model may utilize a greedy algorithm to sample the product latent space) based on a risk-sensitivity (risk-averseness variable α), and based on past trajectories of the ego vehicle as in [0076] – “The encoder 102 may obtain clique node history 104, which may be a set of data that indicates history of nodes (e.g., history of the states of the nodes) in a particular clique, in which the history may be in reference to one or more particular past time intervals or periods. In some embodiments, the encoder 102 calculates or otherwise obtains clique node history 104 in connection with the history of the nodes as described herein”); and

predicting, based on the sampling of the risk-biased latent space distribution, risk-biased future surrounding agent trajectories using the trained decoder of the generative network (Chen, [0092] – “The decoder 124 may output predicted trajectories for agents of each clique in the scene (e.g., denoted as s.sub.pred). In some examples, the predicted trajectories are in reference to a particular future time interval. In an embodiment, for each agent of the particular clique, the decoder 124 outputs one or more trajectories, in which each trajectory corresponds to a respective particular combination of modes of agents in the particular clique.” and [0096] – “The trajectory prediction model may utilize CVaR to focus on the best predictions to maintain output diversity. During training, α may be used to trade-off the model's focus on encoder accuracy vs diversity. In addition to incorporating CVaR, the trajectory prediction model may utilize a greedy algorithm to diversely sample the product latent space” – teaches predicting, based on the sampling of the risk-biased latent space distribution, risk-biased future surrounding agent trajectories using the trained decoder of the generative network (decoder 124 of the trajectory prediction model outputs predicted trajectories for agents of each clique, based on the sampling of the risk-biased latent space distribution utilizing CVaR)).

Claims 9 and 17 incorporate substantively all the limitations of claim 1 in a non-transitory computer-readable storage medium and a system, respectively, and are rejected on the same grounds as above.

Regarding claim 2, Chen teaches the method of claim 1, further comprising training the risk-aware encoder of the generative network to learn the risk-biased latent space distribution based on past and/or future trajectories of the ego vehicle and the risk-sensitivity (Chen, [0096] – “In an embodiment, the trajectory prediction model is trained in connection with one or more conditional value at risk (CVaR) based loss functions, in which CVaR is defined through the following formula, although any variations thereof can be utilized: Eq. (5) in which P is the probability distribution of X and α tunes the level of risk-averseness. In an embodiment, CVaR is the mean of the lowest α-percentile values of x under P. … During training, α may be used to trade-off the model's focus on encoder accuracy vs diversity.” – teaches training the risk-aware encoder of the generative network (the trajectory prediction model, including the encoder, is trained with one or more conditional value at risk based loss functions) to learn the risk-biased latent space distribution based on the risk-sensitivity (risk-averseness variable α), and based on past trajectories of the ego vehicle (as in [0076]) and/or future trajectories of the ego vehicle as in [0094] – “the trajectory prediction model is trained in connection with Evidence Lower Bound (ELBO) loss, which is denoted by the following formula, although any variations thereof can be utilized [ELBO formula] in which z is the clique latent variable, y is the future trajectories of all nodes, and x is the conditional variable, consisting of node and edge history, map encoding, and lane information for all nodes in the clique.” – teaches training the trajectory prediction model, including the encoder, with ELBO loss, which utilizes y as the future trajectories of all nodes and x, which consists of node and edge history for all nodes in the clique).

Claim 10 is similar to claim 2 and is similarly rejected.

Regarding claim 4, Chen teaches the method of claim 1, further comprising performing a vehicle control action to maneuver the ego vehicle according to the risk-biased future surrounding agent trajectories (Chen, [0122] – “the trajectory prediction model is part of or otherwise associated with the one or more autonomous vehicle systems. The one or more autonomous vehicle systems may be in accordance with those described in connection with FIGS. 12A-12D. The one or more autonomous vehicle systems may utilize the predicted trajectories to calculate how to navigate a vehicle.
As an illustrative example, the one or more autonomous vehicle systems utilize the predicted trajectories to calculate how to navigate a vehicle through the scene to avoid collisions with agents of the scene. The one or more autonomous vehicle systems may utilize the predicted trajectories and associated probability values to determine likely trajectories of agents in the scene, and cause a vehicle to move based on the determination (e.g., to avoid colliding with the agents of the scene).” – teaches performing a vehicle control action to maneuver the ego vehicle (the vehicle being caused to move according to a predicted trajectory is an ego vehicle, as in [0124-0125]) according to the risk-biased future surrounding agent trajectories (one or more autonomous vehicle systems may utilize the predicted trajectories to navigate a vehicle through a scene to avoid collisions with agents of the scene)).

Claims 12 and 18 are similar to claim 4 and are similarly rejected.

Regarding claim 8, Chen teaches the method of claim 1, in which the trained encoder comprises a conditional variational auto-encoder (CVAE) encoder and the trained decoder comprises a CVAE decoder (Chen, [0064] – “In an embodiment, the trajectory prediction model is a discrete conditional variational autoencoder (CVAE) model that outputs joint trajectory predictions for multiple agents in a scene, ensuring high scene consistency by reasoning about each agent's motion policy and the influence of their neighbors” – teaches that the trained encoder and trained decoder comprise a CVAE encoder and a CVAE decoder, respectively (the trajectory prediction model is a discrete CVAE model)).

Claims 16 and 20 are similar to claim 8 and are similarly rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Fidler et al. (US Pub. No. 2022/0067983, hereinafter “Fidler”).

Regarding claim 3, Chen teaches the method of claim 1, further comprising:

training a generative predictive model, including an encoder and a decoder, to determine a probability of an event occurring (Chen, [0064] – “In at least one embodiment, the trajectory prediction model comprises an encoder 102 and a decoder 124, and can comprise various components not depicted in FIG. 1. In an embodiment, one or more systems utilize the trajectory prediction model to generate one or more trajectory predictions of one or more agents in a scene. The trajectory prediction model may be in accordance with those described in connection with FIGS. 2-8.
In an embodiment, the trajectory prediction model is a discrete conditional variational autoencoder (CVAE) model that outputs joint trajectory predictions for multiple agents in a scene, ensuring high scene consistency by reasoning about each agent's motion policy and the influence of their neighbors.” – teaches a generative predictive model (trajectory prediction model), including an encoder and a decoder, trained to determine a probability of an event occurring as in [0096] – “the trajectory prediction model is trained in connection with one or more conditional value at risk (CVaR) based loss functions, in which CVaR is defined through the following formula, although any variations thereof can be utilized: Eq. (5) in which P is the probability distribution of X and a tunes the level of risk-averseness. In an embodiment, CVaR is the mean of the lowest α-percentile values of x under P.”); replacing the trained encoder with the risk-aware encoder (Chen, [0098] – “The one or more systems may update one or more components of the trajectory prediction model (e.g., of the encoder 102 and/or decoder 124) such that calculated loss is minimized. The one or more systems may update or otherwise train one or more components of the trajectory prediction model through one or more functions and/or processes such as those described herein and in connection with FIG. 10. 
The one or more systems may continuously cause the trajectory prediction model to process the training data, or portions of the training data, calculate loss based on results of the processing by the trajectory prediction model, and update one or more components of the trajectory prediction model until loss is below a defined threshold, accuracy of the trajectory prediction model is above a defined threshold, and/or until any suitable event based on any suitable metric associated with the trajectory prediction model.” – teaches replacing the trained encoder with the risk-aware encoder (encoder may update one or more components of the trajectory prediction model, such as the encoder, to minimize calculated loss)); training the risk-aware encoder with an added constraint of risk estimation (Chen, [0096] – “the trajectory prediction model is trained in connection with one or more conditional value at risk (CVaR) based loss functions, in which CVaR is defined through the following formula, although any variations thereof can be utilized: Eq. (5) in which P is the probability distribution of X and a tunes the level of risk-averseness. In an embodiment, CVaR is the mean of the lowest α-percentile values of x under P… During training, α may be used to trade-off the model's focus on encoder accuracy vs diversity.” – teaches training the risk aware encoder with an added constraint of risk estimation (trains using CVaR based loss functions, tunes risk-averseness variable a to trade off focus on encoder accuracy vs diversity)); and operating the trained decoder to predict events with a focus on the events that have a high cost or risk (Chen, [0092] – “The decoder 124 may output predicted trajectories for agents of each clique in the scene (e.g., denoted as s.sub.pred). In some examples, the predicted trajectories are in reference to a particular future time interval. 
In an embodiment, for each agent of the particular clique, the decoder 124 outputs one or more trajectories, in which each trajectory corresponds to a respective particular combination of modes of agents in the particular clique.” and in [0098] – “The one or more systems may continuously cause the trajectory prediction model to process the training data, or portions of the training data, calculate loss based on results of the processing by the trajectory prediction model, and update one or more components of the trajectory prediction model until loss is below a defined threshold, accuracy of the trajectory prediction model is above a defined threshold, and/or until any suitable event based on any suitable metric associated with the trajectory prediction model.” – teaches operating the trained decoder to predict events with a focus on the events that have a high cost or risk (decoder outputs predicted trajectories of agents in the scene and is operated to predict any suitable event based on any suitable metric associated with the trajectory prediction model)). Chen fails to explicitly teach fixing the trained decoder such that the trained decoder is no longer training; and training the risk-aware encoder to emulate the trained encoder. 
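For reference, the CVaR quantity quoted from Chen [0096], the mean of the lowest α-percentile of values sampled under P, can be sketched in a few lines. The function below is an illustrative reading of that quoted definition, not code from Chen or from the application:

```python
import numpy as np

def cvar(samples, alpha):
    """Mean of the lowest alpha-percentile of sampled values, per the
    definition quoted from Chen [0096]; alpha tunes risk-averseness,
    and alpha = 1.0 recovers the plain expectation E[X]."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(x))))  # size of the alpha-tail
    return float(x[:k].mean())

costs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(cvar(costs, 0.3))  # mean of the three lowest samples: 2.0
print(cvar(costs, 1.0))  # plain mean: 5.5
```

With small α the statistic concentrates on one tail of the sampled distribution, which is the accuracy-vs-diversity trade-off the quoted passage attributes to α.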
However, analogous to the field of the claimed invention, Fidler teaches:

fixing the trained decoder such that the trained decoder is no longer training (Fidler, [0074] – "Freezing previously learnt decoder 112, stated as p_w1(y|z), a new encoder 108 may be learned." and in [0091] – "In at least one embodiment, said freezing comprises halting changes to various weights or parameters of said decoder, while training to other portions of one or more neural networks associated with said decoder continues." – teaches fixing the trained decoder such that the trained decoder is no longer training (learnt decoder 112 is frozen, thus the trained decoder is no longer training));

replacing the trained encoder with the risk-aware encoder (Fidler, [0065] – "given a dataset D = {y_i}, i = 1 … N, an amodal-vae 104 learns a latent variable generative model p(y, z) = p_w1(y|z)p(z), where p(z) is a prior distribution over latent variables and p_w1(y|z) is a likelihood distribution, usually interpreted as a decoder and typically parametrized by a neural network with parameters w_1. In at least one embodiment, a true posterior distribution p(y, z) may be intractable, and amodal-vae 104 instead employs an auxiliary approximate posterior distribution or encoder q_w2(z|y), parametrized by another neural network with parameters w_2." – teaches replacing the trained encoder with the risk-aware encoder (the amodal-vae employs an auxiliary encoder instead of the initial encoder)); and

training the risk-aware encoder to emulate the trained encoder with an added constraint (Fidler, [0065] – "In at least one embodiment, a true posterior distribution p(y, z) may be intractable, and amodal-vae 104 instead employs an auxiliary approximate posterior distribution or encoder q_w2(z|y), parametrized by another neural network with parameters w_2. In at least one embodiment, when additional information about training data is available, such as a sample's classes or category c, amodal-vae 104 is extended to be a conditional variational autoencoder, in which an encoder portion and decoder portion are conditioned on this class information." – teaches training an encoder to emulate the trained encoder with an added constraint (the amodal-vae employs an auxiliary encoder, parametrized by another neural network, and the amodal-vae is extended to be a conditional variational autoencoder in which an encoder portion is conditioned on class information, an added constraint)).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the fixing of the decoder and the training of an encoder to emulate the trained encoder, as taught by Fidler, into the risk-aware encoder and decoder operation of Chen, in order to fix the decoder and replace the trained encoder with a risk-constrained encoder. Doing so would condition an encoder and decoder according to a category (Fidler, [0065]) and enable the system to train a new encoder while the corresponding decoder is fixed (Fidler, [0092]).

Claim 11 is similar to claim 3, hence similarly rejected.

Claims 5-7, 13-15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Cui et al. (US Pub. No. 2022/0153309, hereinafter "Cui").
Regarding claim 5, Chen teaches the method of claim 1, in which predicting the risk-biased future surrounding agent trajectories comprises:

computing a risk loss based on the computed expected cost (Chen, [0096] – "In an embodiment, the trajectory prediction model may utilize one or more loss functions to prevent or otherwise mitigate mode collapse, which refers to a process in which a decoder tends to predict similar trajectories under different modes since the likelihood cost is a weighted sum of 2-norm errors and the average prediction is likely to be a local minimum. In an embodiment, the trajectory prediction model is trained in connection with one or more conditional value at risk (CVaR) based loss functions, in which CVaR is defined through the following formula, although any variations thereof can be utilized: Eq. (5)" – teaches computing a risk loss (CVaR) based on the computed expected cost (in Eq. (5), the cost is the expectation E[X])); and

computing a risk-biasing loss estimation based on the computed risk loss (Chen, [0096] – "In an embodiment, the trajectory prediction model is trained in connection with one or more loss functions denoted through the following formula, although any variations thereof can be utilized: Eq. (7), in which the formula is the best α-percentile loss value among the discrete modes, and the CVaR loss does not force all modes to match the ground truth, only those that are already close, directly preventing mode collapse. The trajectory prediction model may utilize CVaR to focus on the best predictions to maintain output diversity." – teaches computing a risk-biasing loss estimation based on the computed risk loss (Eq. (7) is based on the computed risk loss, i.e., the CVaR loss function and the expected cost)).

Chen fails to explicitly teach computing an expected cost associated with a predicted agent future trajectory and a predicted ego vehicle future trajectory.

However, analogous to the field of the claimed invention, Cui teaches:

computing an expected cost associated with a predicted agent future trajectory and a predicted ego vehicle future trajectory (Cui, [0087] – "In some implementations, a planner cost function can be used in the contingency planner 320, where the planner cost function can be a linear combination of various subcosts that encode different aspects of driving (e.g., comfort, traffic-rules, route, etc.). In particular, collision subcosts can penalize an SDV trajectory if it overlaps with the predicted trajectories of other actors or has high speed in close distance to them. Even more particularly, trajectories that violate a headway buffer to the lead vehicle can be penalized." – teaches computing an expected cost (the cost function is a linear combination of subcosts) associated with a predicted agent future trajectory and a predicted ego vehicle future trajectory (the collision subcost is based on the overlap of other actors' predicted trajectories with the self-driving, or ego, vehicle, thus computing an expected cost associated with the predicted agent trajectory and the predicted ego vehicle trajectory)).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the expected-cost calculation of Cui into the risk calculations of Chen in order to determine an expected cost between the ego vehicle and surrounding agent trajectories. Doing so would improve motion forecasting, reduce computing resource usage, and reflect future interactions among multiple agents in a scene (Cui, [0005]).

Claim 13 is similar to claim 5, hence similarly rejected.
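The planner cost structure quoted from Cui [0087], a linear combination of subcosts with a collision subcost driven by overlap between the ego trajectory and other actors' predicted trajectories, can be illustrated with a toy sketch. The subcost names follow Cui's examples, but the weights, the comfort proxy, and the overlap radius below are invented for illustration:

```python
import math

# Illustrative weights for the linear combination of subcosts
# (the values are made up, not from Cui).
WEIGHTS = {"comfort": 1.0, "collision": 10.0}

def comfort_subcost(ego):
    # Penalize abrupt motion via squared second differences of the
    # waypoints (a common comfort proxy; not Cui's exact subcost).
    cost = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(ego, ego[1:], ego[2:]):
        ax, ay = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0
        cost += ax * ax + ay * ay
    return cost

def collision_subcost(ego, actors, radius=2.0):
    # Penalize ego waypoints that come within `radius` of any actor's
    # predicted waypoint at the same timestep (overlap penalty).
    cost = 0.0
    for t, (ex, ey) in enumerate(ego):
        for traj in actors:
            ax, ay = traj[t]
            d = math.hypot(ex - ax, ey - ay)
            if d < radius:
                cost += (radius - d) ** 2
    return cost

def planner_cost(ego, actors):
    # Linear combination of subcosts, in the spirit of Cui [0087].
    return (WEIGHTS["comfort"] * comfort_subcost(ego)
            + WEIGHTS["collision"] * collision_subcost(ego, actors))

ego = [(0, 0), (1, 0), (2, 0), (3, 0)]       # straight, constant speed
actor = [[(5, 5), (4, 4), (3, 3), (3, 1)]]   # predicted to cut close at t=3
print(planner_cost(ego, actor))              # collision term dominates: 10.0
```

Because each subcost is a separate term, a planner can reweight safety against comfort without changing the trajectory representation, which is the appeal of the linear-combination form the examiner relies on.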
Regarding claim 6, the combination of Chen and Cui teaches the method of claim 5, in which computing the expected cost further comprises overestimating a probability of human motion when an expected cost of human motion exceeds a predetermined value according to a current motion plan of the ego vehicle (Chen, [0122] – "The one or more autonomous vehicle systems may utilize the predicted trajectories to calculate how to navigate a vehicle. As an illustrative example, the one or more autonomous vehicle systems utilize the predicted trajectories to calculate how to navigate a vehicle through the scene to avoid collisions with agents of the scene. The one or more autonomous vehicle systems may utilize the predicted trajectories and associated probability values to determine likely trajectories of agents in the scene, and cause a vehicle to move based on the determination (e.g., to avoid colliding with the agents of the scene)." and in [0124] – "the MPC may plan M corresponding ego trajectories with the additional constraint that the first control inputs for all M ego trajectories must be the same, which can be denoted by the following formulas, although any variations thereof can be utilized: Eq. (8), in which π_i is the probability of prediction mode i, s′ and u′ are the planned state and input sequences of the ego vehicle under the i-th mode, J is the cost function, and C is a constraint (e.g., collision avoidance), in which said formulas are a nonlinear optimization problem and are solved with any suitable process, such as an interior point optimizer (IPOPT) or any other suitable optimizer." – teaches overestimating a probability of human motion (the systems utilize predicted trajectories to calculate how to navigate a vehicle to avoid collisions, thus overestimating a probability of human motion) when an expected cost of human motion exceeds a predetermined value according to a current motion plan of the ego vehicle (constraint C, the collision-avoidance constraint, is the predetermined value that, when exceeded, causes the system to calculate how to navigate the vehicle to avoid collisions)).

Claim 14 is similar to claim 6, hence similarly rejected.

Regarding claim 7, Chen teaches the method of claim 1. Chen fails to explicitly teach in which a planner of the ego vehicle selects a reduced amount of prediction samples during prediction.

However, analogous to the field of the claimed invention, Cui teaches: in which a planner of the ego vehicle selects a reduced amount of prediction samples during prediction (Cui, [0090] – "In some implementations, a sampling approach can be taken in the contingency planner 320 to solve minimization (e.g., minimization used in the actor convolutional neural network 312). In particular, a set of pairs can be generated which include possible short-term trajectories (e.g., single immediate action 322) and their possible subsequent set of trajectories 324. Specifically, a dense set of initial actions can be considered such that the final executed trajectory can be smooth and comfortable." – teaches a planner of the ego vehicle selecting a reduced amount of prediction samples during prediction (planner 320 takes a sampling approach to solve minimization and selects from a dense set of initial actions)).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the planner of Cui into the method of Chen in order to select a reduced amount of samples during prediction. Doing so would find a proper contingent plan for the future and obtain a more accurate cost-to-go for the initial action (Cui, [0090]).

Claims 15 and 19 are similar to claim 7, hence similarly rejected.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Kim et al. (US Pub. No. 2024/0025455) teaches systems and methods for determining a driving plan of an autonomous vehicle on the basis of road user prediction: determining a driving plan by accurately predicting future actions of road users near the autonomous vehicle, such as vehicles, pedestrians, bikers, or motorcycles, and conservatively predicting future actions of surrounding agents to guide the autonomous vehicle safely.

Jiang et al. (US Pub. No. 2024/0025445) teaches a safety planning system that utilizes anomaly detection and a conditional variational auto-encoder to detect anomalous objects in the path of an autonomous vehicle. Identified anomalous objects can be flagged as high risk to surrounding vehicles, and a vehicle control action is performed to maneuver the vehicle around identified high-risk anomalies.

Gao et al. (NPL: "Social-DualCVAE: Multimodal Trajectory Forecasting Based on Social Interactions Pattern Aware and Dual Conditional Variational Auto-Encoder") teaches a dual-conditional variational auto-encoder model for trajectory forecasting, training the encoder on historical and future trajectories.

Liu et al. (NPL: "Interactive Trajectory Prediction Using a Driving Risk Map-Integrated Deep Learning Method for Surrounding Vehicles on Highways") teaches utilizing a conditional variational auto-encoder to generate candidate trajectories, assigning probabilities to each candidate trajectory, and randomly selecting one of the candidate trajectories to reflect driver intention uncertainty.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOUIS C NYE, whose telephone number is 571-272-0636. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MATT ELL, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOUIS CHRISTOPHER NYE/
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

Nov 30, 2022
Application Filed
Jan 08, 2026
Non-Final Rejection — §101, §102, §103
Apr 09, 2026
Applicant Interview (Telephonic)
Apr 09, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524683
METHOD FOR PREDICTING REMAINING USEFUL LIFE (RUL) OF AERO-ENGINE BASED ON AUTOMATIC DIFFERENTIAL LEARNING DEEP NEURAL NETWORK (ADLDNN)
2y 5m to grant · Granted Jan 13, 2026
Based on the examiner's most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
22%
Grant Probability
58%
With Interview (+35.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
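The headline figures above are reproducible from the stated counts. The sketch below assumes the tool's methodology is as displayed: allow rate equals grants divided by resolved cases, and the with-interview figure adds the reported interview lift to the base rate (an inference from the displayed numbers, not a documented formula):

```python
granted, resolved = 2, 9       # examiner's career data shown above
interview_lift = 0.357         # reported interview lift (+35.7%)

allow_rate = granted / resolved            # base grant probability
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.0%}")     # displays 22%
print(f"{with_interview:.0%}") # displays 58%
```

With only nine resolved cases, both percentages carry wide uncertainty, which is worth keeping in mind when weighing the "At Risk" label.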
