Prosecution Insights
Last updated: April 19, 2026
Application No. 18/527,630

PREDICTING THE FURTHER DEVELOPMENT OF A SCENARIO WITH AGGREGATION OF LATENT REPRESENTATIONS

Final Rejection — §102, §103

Filed: Dec 04, 2023
Examiner: HINTON, HENRY R
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (35 granted / 46 resolved), +24.1% vs TC average (above average)
Interview Lift: +33.7% on resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 24 applications currently pending
Career History: 70 total applications across all art units

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)

Deltas are vs. the estimated Tech Center average. Based on career data from 46 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The 01.14.2026 amendments are entered. Claims 1, 3-6, 9, 14, 15, and 17-18 are amended. Claims 10-13 are withdrawn. No claims are canceled, and no claims are newly added. Claims 1-19 remain pending.

Claim Objections

The objections to claims 14-15 are withdrawn in light of the amendments.

35 U.S.C. § 112 Rejections

The rejections of claims 3, 5, and 9 as indefinite are withdrawn in light of the amendments.

35 U.S.C. § 101 Rejections

The rejections of claims 1-8 and 14-15 as being directed to ineligible subject matter are withdrawn in light of the amendments.

Response to Arguments

Applicant’s arguments in the 01.14.2026 Remarks (“Remarks”) with respect to claims 1-4 and 14-15 under 35 U.S.C. § 102 and claims 5-9 and 16-19 under 35 U.S.C. § 103 have been fully considered but are found unconvincing for the reasons below.

35 U.S.C. § 102 Rejections

Applicant contends on pp. 9-10 of the Remarks that the amendment reciting “latent representations” overcomes Choi because the encodings of past trajectories, represented by vectors, are not latent variables. On that basis, Applicant further contends that Choi’s disclosure of latent variables does not read on how the latent representations of the present invention are processed and predicted. The examiner respectfully disagrees. First, the examiner clarifies that in rejecting claim 1, it is not the encodings of the past trajectories HXi that are interpreted as latent representations. Rather, the trajectories X̂ and Ŷ themselves are taken as latent representations and predictions for latent representations, respectively.
Furthermore, while Choi teaches latent variables of one type, this does not force the term “latent representation” in claim 1 to be interpreted as only such a variable. The broadest reasonable interpretation of a “latent representation” formed by “processing measured observations . . . using an encoder” includes the product of converting sensor data into trajectories, as done to generate the input trajectories X̂ in Choi. In other words, X̂ is an encoded latent representation of past trajectories. Ŷ, therefore, is understood as a latent representation of future trajectories (i.e., “predictions for latent representations”).

Applicant further contends on p. 10 of the Remarks that Choi therefore does not disclose “to aggregate the latent representations from a specified time horizon prior to the current point in time t to form the processing product.” This argument is also found unconvincing. Given that X̂ and Ŷ may be broadly interpreted as reading on latent representations, the stochastic latent variable zi represents an aggregation of latent representations because it is constructed during training (the specified time horizon prior to the current point in time t) using sets of past and future trajectories Xi and Yi.

In the interest of compact prosecution, the examiner notes that amending the claim to further describe the term “latent representation” in line with the specification (e.g., the means used to generate it or some other defining characteristic) would narrow the term enough to at least overcome the art of record.

Applicant adds on p. 10 of the Remarks that because the latent variable zi is not predicted, Choi does not read on “determining predictions for the latent representations of the scenario at future points in time . . . .” (Emphasis added.)
However, this argument is moot because, as discussed above, one of ordinary skill in the art is not prevented from interpreting the vector representations of trajectories taught by Choi as reading on the latent representations of the present claims. Thus, generating predicted trajectories as discussed in [0032] and [0057] of Choi, by combining recent past trajectories with samples of zi, also reads on the present claims so long as the term “latent representations” remains open to the broader interpretation. Therefore, the rejections of claims 1-4 and 14-15 under 35 U.S.C. § 102 stand.

35 U.S.C. § 103 Rejections

Applicant contends on pp. 10-11 that the rejections of claims 5-9 and 16-19, which depend from claim 1, as being obvious over other prior art should be withdrawn because claim 1 is patentable over Choi. However, because claim 1 is not patentable over Choi as discussed above, and no further argument as to the patentability of these claims has been presented, the rejections of claims 5-9 and 16-19 stand.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4 and 14-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 20180124423 A1 to Choi, Wongun et al. (“Choi”).
Regarding claim 1, Choi discloses a method for predicting a future state and/or behavior of a scenario whose further development is correlated with one or more observable variables, without directly and unambiguously arising from the observable variables, comprising the following steps of:

processing measured observations of the observable variables at a current point in time t using an encoder to form latent representations of the scenario (Choi FIG. 4, [0020]: “Since there can be multiple plausible futures given the same inputs (including images I and past trajectories X), block 202 generates a diverse set of prediction samples Ŷ to provide accurate prediction of future trajectories 106.” Observations taken as individual images and points in past trajectories at a point in time. Understood that at test time (taken as the current point in time t), the past trajectories X and images I are those observed. The entire observed trajectory X comprising the points is taken as a latent representation.);

processing the latent representations using a specified processing function to form processing products (Choi [0032]: “At test time, block 304 does not have access to encodings of future trajectories, so the encodings of past trajectories . . . drawn from recent data, are combined with multiple random samples of latent variable z.sub.i drawn from the prior P.sub.v(z.sub.i) . . .” Combinations of recent past trajectories and samples of z.sub.i taken as processing products.);

determining predictions for the latent representations of the scenario at future points in time τ using a context predictor based on at least the processing products as the prediction of the future state and/or behavior (Choi [0057]: “. . . the prediction sample module 710 generates sets of such predictions . . .”, [0032]: Sets of trajectory predictions passed to block 204 taken as predictions for the latent representations that are based on the processing products (combinations of recent past trajectories and samples of z.sub.i).); and

controlling a vehicle and/or a robot and/or a driving assistance system and/or a system for monitoring regions, based on the determined predictions (Choi [0058]: “For example, a large number of agents being detected at a crosswalk, with likely trajectories of crossing the street, may trigger a change in a lighting system's pattern to provide a ‘walk’ signal to those agents.”);

wherein the specified processing function is configured to aggregate the latent representations from a specified time horizon prior to the current point in time t to form the processing product (Choi [0027]-[0028]: The z.sub.i distribution is created based on the training past and future trajectories, taken as an aggregation of latent representations.).

Regarding claim 2, Choi discloses the method according to claim 1, wherein the processing function is additionally configured to include predictions from the specified time horizon in the formation of the processing product (Choi [0027]: “During training, block 302 learns Q.sub.ϕ(z.sub.i|Y.sub.i,X.sub.i) such that the recognition network gives higher probability to values of z.sub.i that are likely to produce a reconstruction Ŷ.sub.i that is close to actual predictions given the full context of training data for a given X.sub.i and Y.sub.i.” The reconstruction is at least included in the training of the z distribution (the specified time horizon), which is later used during the test phase (the current point in time).).
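The test-time pipeline that the rejection maps onto claim 1 (encode the observed past trajectories, combine the encoding with random samples of z.sub.i drawn from the prior, and decode a diverse set of future trajectories) can be sketched in a few lines. This is a hedged illustration of the general CVAE-style scheme, not Choi's actual implementation: the function names and the stand-in linear encoder/decoder are hypothetical, and a real system would use learned neural networks in their place.

```python
import numpy as np

rng = np.random.default_rng(0)
T_PAST, T_FUT, Z_DIM = 8, 12, 16

# Stand-in "network" weights (a real system would learn these).
W_dec = rng.normal(scale=0.1, size=(T_PAST * 2 + Z_DIM, T_FUT * 2))

def encode_past(X):
    # Encoder: (T_PAST, 2) observed trajectory -> feature vector.
    # The rejection reads this encoded trajectory itself as a "latent representation".
    return X.flatten()

def decode_future(h_x, z):
    # Decoder: combine the past encoding with a latent sample z_i and
    # emit one candidate future trajectory of shape (T_FUT, 2).
    return (np.concatenate([h_x, z]) @ W_dec).reshape(T_FUT, 2)

X = rng.normal(size=(T_PAST, 2))      # measured past trajectory at time t
h_x = encode_past(X)

# Test time: no future encodings are available, so draw multiple samples of
# z_i from the prior (here standard normal) to get a diverse prediction set.
Y_hats = [decode_future(h_x, rng.normal(size=Z_DIM)) for _ in range(5)]
print(len(Y_hats), Y_hats[0].shape)   # 5 candidate trajectories, each (12, 2)
```

Each distinct z sample yields a different decoded trajectory, which is the "diverse set of prediction samples" the citation to Choi [0020] and [0032] describes.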
Regarding claim 3, Choi discloses the method according to claim 1, wherein the context predictor is additionally configured to include further data present at the current point in time t in the formation of the predictions for the latent representations (Choi [0028]-[0032]: At training time, taken as the point in time, the context predictor (RNN decoder 416) outputs a plurality of trajectory predictions while also including the further data of the known Y (future) trajectories.).

Regarding claim 4, Choi discloses the method according to claim 1, wherein predictions for observations of the observable variables at the further points in time τ are reconstructed from the predictions for the latent representations of the scenario, as a further part of the prediction of the future state and/or behavior (Choi [0057]: “. . . the prediction sample module 710 generates sets of such predictions for the ranking/refinement module 712 to work with, ultimately producing one or more predictions that represent the most likely future trajectories for agents . . .” Predicted future trajectories output by block 204 taken as a reconstruction of the predictions for the latent representations. As discussed above, both the input X and the output future trajectories comprise trajectories at points in time, meaning that the final output of future trajectories also constitutes observations of observable variables, just at points in time in the future.).

Claim 14 is rejected for reasons similar to claim 1, applied to a non-transitory machine-readable data carrier on which one or more computer programs are stored (Choi [0047]: “Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.”).
Claim 15 is rejected for reasons similar to claim 1, applied to one or more computers and/or compute instances (Choi [0047]: “Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 5-9 are rejected under 35 U.S.C. 103 as being unpatentable over Choi, and further in view of US 20200192366 A1 to Levinson, Jesse (“Levinson”).

Regarding claim 5, Choi teaches the method according to claim 4. Choi does not appear to expressly teach wherein the predictions for observations of the observable variables, and/or the predictions for the latent representations of the scenario, are checked for plausibility against later measured observations in temporal connection with the future points in time τ.
However, Levinson teaches wherein the predictions for observations of the observable variables (Levinson [0068]: Predicted trajectories at a range of times (understood as a set of points in time) taken as predictions for observations of the variables.), and/or the predictions for the latent representations of the scenario (The prediction for the whole trajectory (as represented by the range of times) taken as predictions for latent representations.), are checked for plausibility against later measured observations in temporal connection with the future points in time τ (Levinson [0068]: “The vehicle 102 can detect an event based at least in part on the actual behavior of the object differs by a threshold amount from the predicted behavior of the object. The difference may be quantified by comparing a predicted trajectory to an actual measured trajectory of the object for a point in time or a range of times.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have combined the system taught by Choi, which predicts trajectories for mobile agents at future points in time, with the system taught by Levinson, which compares predicted trajectories at future points in time against the actual measured trajectory and checks the difference against a threshold to log an event. Doing so would have “improve[d] the performance of the one or more detectors” by allowing the system to flag and log inconsistent data to be used to retrain the system, improving its performance as suggested in [0025] of Levinson.
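The Levinson plausibility check relied on above reduces to comparing a predicted trajectory against the later measured one and flagging an event when they differ by more than a threshold. A minimal sketch follows; the mean pointwise-displacement metric, the threshold value, and the example coordinates are all illustrative assumptions, since Levinson [0068] specifies only that the difference is quantified and checked against a threshold.

```python
import numpy as np

def trajectory_event(predicted, measured, threshold=0.5):
    """Flag an event when the measured trajectory deviates from the predicted
    one by more than `threshold`, using the mean pointwise displacement over
    the compared time range (metric and threshold are illustrative)."""
    error = np.linalg.norm(predicted - measured, axis=-1).mean()
    return error > threshold, error

pred = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])  # predicted positions at future times
meas = np.array([[0.0, 0.1], [1.0, 0.2], [2.0, 1.5]])  # later measured positions

flagged, err = trajectory_event(pred, meas)
print(flagged, round(err, 3))
```

In the cited combination, a flagged event would be logged and the inconsistent data used to retrain the predictor, per Levinson [0025].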
Regarding claim 6, the above combination of Choi and Levinson teaches the method according to claim 5, wherein the plausibility check includes: processing the later measured observations using the encoder to form further latent representations (Understood that Choi teaches that measured observations (trajectories at a point in time) input into block 202 become part of the past trajectory X (taken as further latent representations).), and comparing the further latent representations with the predictions for the latent representations (Levinson [0068]: “The vehicle 102 can detect an event based at least in part on the actual behavior of the object differs by a threshold amount from the predicted behavior of the object. The difference may be quantified by comparing a predicted trajectory to an actual measured trajectory of the object for a point in time or a range of times.”).

Regarding claim 7, the above combination of Choi and Levinson teaches the method according to claim 5, wherein the scenario is characterized by the movement of: road users or pedestrians (Choi [0018]: “Referring now to FIG. 1, an exemplary scene 100 is shown. The scene 100 depicts an intersection that is being monitored and includes a number of agents 102, which in this case may include both pedestrians and automobiles.”) or animals or other autonomous agents (Use of the term “or” requires consideration of only one of the listed options.).

Regarding claim 8, the above combination of Choi and Levinson teaches the method according to claim 7, wherein at least one trajectory of an autonomous agent of the scenario, and/or a space occupied by at least one autonomous agent in the scenario (Use of the term “and/or” requires consideration of only one of the listed options. In the interest of compact prosecution, the Examiner notes that the BRI of a trajectory encompasses a space occupied over time.), as a function of time, is evaluated from the determined prediction of the future state and/or behavior (Choi [0057]: “. . . the prediction sample module 710 generates sets of such predictions for the ranking/refinement module 712 to work with, ultimately producing one or more predictions that represent the most likely future trajectories for agents . . .” Choi teaches at [0032] that future trajectories comprise trajectories at future points in time, understood as being a function of time.).

Regarding claim 9, the above combination of Choi and Levinson teaches the method according to claim 8, wherein: a control signal is formed: from the determined prediction of the future state and/or behavior, and/or from a result of the plausibility check (Use of the term “and/or” requires consideration of only one of the listed options.), and/or from the evaluated at least one trajectory (Choi [0058]: “A response module 716 provides manual or automated actions responsive to the determined trajectories, where a human operator can trigger a response through the user interface 714 or a response can be triggered automatically in response to the trajectories matching certain conditions.”), and/or from the evaluated occupied space (Use of the term “and/or” requires consideration of only one of the listed options.); and the vehicle and/or the robot and/or the driving assistance system (Use of the term “and/or” requires consideration of only one of the listed options.) and/or a system for monitoring regions, is controlled with the control signal (Choi [0058]: “For example, a large number of agents being detected at a crosswalk, with likely trajectories of crossing the street, may trigger a change in a lighting system's pattern to provide a ‘walk’ signal to those agents.”).

Claims 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Choi as applied to claim 1 above, and further in view of US 20210191395 A1 to Gao, Jiyang et al. (“Gao”).

Regarding claim 16, Choi teaches the method according to claim 1, further comprising: prior to using the context encoder, the processing function, and/or the context predictor for the determining of the predictions, training the context encoder (Use of the term “and/or” requires consideration of only one of the listed options.), the processing function, and/or the context predictor (Choi [0027]: “During training, block 302 learns Q.sub.ϕ(z.sub.i|Y.sub.i,X.sub.i) such that the recognition network gives higher probability to values of z.sub.i that are likely to produce a reconstruction Ŷ.sub.i that is close to actual predictions given the full context of training data for a given X.sub.i and Y.sub.i.”), wherein the training includes: providing measured observations Ot of observable variables at training points in time Pt within a specified measurement time horizon M, wherein Pt < M (Choi [0023], [0028]: Training observations X and Y disclosed as comprising observable variables at points in time, with δ taken as the time horizon M.).
While Choi teaches at [0027] training the context encoder to produce a predicted trajectory close to the ground-truth trajectory given a historical trajectory, it does not appear to expressly teach: based on a first subset of the measured observations Ot at a subset of the training points in time Pt that are within a specified test time horizon having an endpoint T such that Pt ≤ T and T < M, determining a test prediction of a future state and/or behavior of a training scenario; assessing, using a specified cost function, how well the test prediction of the future state and/or behavior of the training scenario, and/or at least one subsequent result determined from the prediction of the future state and/or behavior of the training scenario, aligns with a second subset of the measured observations Ot at a subset of the training points in time Pt that are within an evaluation time horizon following the test time horizon, wherein T < Pt ≤ M; and optimizing parameters of the context encoder, the processing function, and/or the context predictor to improve the assessment by the cost function as additional test predictions continue to be determined.

However, Gao teaches: based on a first subset of the measured observations Ot at a subset of the training points in time Pt that are within a specified test time horizon having an endpoint T such that Pt ≤ T and T < M (Gao [0117]: “The system receives a plurality of training examples, each training examples having input data characterizing one or more vehicles in an environment and corresponding vehicle intent information (502).” The input data is taken as the first subset; Gao teaches that the input data includes trajectory measurements at past time points.), determining a test prediction of a future state and/or behavior of a training scenario (Gao [0119]: “The system can use a plurality of intent-specific neural networks . . . can generate an output for a corresponding intent . . . Each intent prediction can include . . . a predicted trajectory that would be followed by the vehicle in a future time period.” Gao teaches that intent predictions include trajectory predictions for multiple time steps in the future.); assessing, using a specified cost function, how well the test prediction of the future state and/or behavior of the training scenario, and/or at least one subsequent result determined from the prediction of the future state and/or behavior of the training scenario, aligns with a second subset of the measured observations Ot at a subset of the training points in time Pt that are within an evaluation time horizon following the test time horizon, wherein T < Pt ≤ M (Gao [0120]: “The system can compare the intent prediction to the labels in the training examples. The system can calculate a loss which can measure the difference between the intent prediction and the labels in the training example. The loss can include: . . . a trajectory regression loss, e.g., smooth L1 loss between the predicted coordinates and the labeled coordinates at a series of future time steps.”); and optimizing parameters of the context encoder, the processing function, and/or the context predictor to improve the assessment by the cost function as additional test predictions continue to be determined (Gao [0123]: “The system can generate updated model parameter values based on a loss by using an appropriate updating technique . . .”, FIG. 1: Gao depicts a cyclical training method in which training examples are fed into a training neural network subsystem, predictions are made leading to updated model parameters, and those parameters are used in the next round of training examples. Taken as improving the cost function assessment each time test predictions are determined.).
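The smooth L1 trajectory regression loss that Gao [0120] is cited for is a standard construction (quadratic for small residuals, linear for large ones) and can be written out directly. The sketch below is a generic implementation, not Gao's code; the beta parameter and the example coordinates are illustrative.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic where |residual| < beta,
    linear beyond that, averaged over all coordinates."""
    d = np.abs(pred - target)
    per_elem = np.where(d < beta, 0.5 * d**2 / beta, d - 0.5 * beta)
    return per_elem.mean()

pred   = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # predicted coords at future steps
labels = np.array([[0.1, 0.0], [1.5, 1.0], [4.0, 2.0]])  # labeled ground-truth coords

print(smooth_l1(pred, labels))
```

The linear tail is what makes the loss robust to a single badly mispredicted time step, which is why it is a common choice for trajectory regression.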
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to have combined the method taught by Choi of training a system that generates future trajectories based on previous observational data with the method taught by Gao of training such a system via a loss function that measures the difference between a predicted and a known trajectory. Doing so would have improved the accuracy of the trajectory prediction by quantifying the error between the prediction and the ground truth, allowing that error to be used as feedback in training the model, as taught in Gao.

Regarding claim 17, the above combination of Choi and Gao teaches the method according to claim 16, wherein the determination of the test prediction is performed, based on the measured observations at the training points in time, by the formation of latent representations from the measured observations of the training points in time using the encoder, the formation of processing products using the processing function, and the determination of the predictions using the context predictor (Gao [0120]: “The system can compare the intent prediction to the labels in the training examples. The system can calculate a loss which can measure the difference between the intent prediction and the labels in the training example. The loss can include: . . . a trajectory regression loss, e.g., smooth L1 loss between the predicted coordinates and the labeled coordinates at a series of future time steps.” Broadly interpreting the trajectory predictions made by block 202 of Choi as discussed for claim 1 above, a person of ordinary skill in the art would have understood that the test prediction to be tested in Gao would have been formed in the manner disclosed by block 202 of Choi, as discussed for claim 1 and repeated here for claim 17.).
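The cyclical training that Gao's FIG. 1 and [0123] are cited for (predict, score with the loss, update parameters, repeat) is the ordinary gradient-descent loop. A toy sketch follows, using a linear model and a squared-error stand-in for the regression loss; the data, dimensions, and learning rate are all illustrative assumptions, not taken from Gao.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: inputs stand in for past-trajectory features, Y for the
# labeled future coordinate (generated from known weights so an exact fit exists).
X = rng.normal(size=(32, 4))
w_true = np.array([[1.0], [-2.0], [0.5], [3.0]])
Y = X @ w_true

w = np.zeros((4, 1))  # model parameters to optimize

def loss(w):
    # Squared-error stand-in for the trajectory regression loss.
    return float(np.mean((X @ w - Y) ** 2))

lr = 0.05
for _ in range(1000):                        # predict -> score -> update, repeated
    grad = 2 * X.T @ (X @ w - Y) / len(X)    # gradient of the mean squared error
    w -= lr * grad

print(f"initial loss {loss(np.zeros((4, 1))):.3f} -> trained loss {loss(w):.2e}")
```

Each pass corresponds to one turn of Gao's cycle: the current parameters produce predictions, the loss scores them against the labels, and the update improves the next round's predictions.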
Regarding claim 18, the above combination of Choi and Gao teaches the method according to claim 17, wherein: the second subset of the measured observations is processed using the context encoder to form the latent representations (Gao [0120]: “The loss can include . . . a trajectory regression loss, e.g., smooth L1 loss between the predicted coordinates and the labeled coordinates at a series of future time steps . . .” Gao teaches at [0117] that the training examples contain future trajectories. Because this information is compared with the intent prediction (taken in this combination as the trajectory predictions of block 202, i.e., the latent representations), it follows that the labeled training data for future trajectories in the training examples is also latent representations.); and the cost function measures distances between (a) the latent representations and (b) the predictions for the latent representations (Gao [0120]: See the above citation. By measuring the distances between the predicted and actual observations, Gao inherently teaches measuring the distance between the predicted and actual whole trajectories.).

Regarding claim 19, the above combination of Choi and Gao teaches the method according to claim 16, wherein the cost function measures distances between (a) the measured observations and (b) the predictions for observations (Gao [0120]: “The loss can include . . . a trajectory regression loss, e.g., smooth L1 loss between the predicted coordinates and the labeled coordinates at a series of future time steps . . .”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Liu, Jerry et al., US 20210152831 A1, “Conditional Entropy Coding For Efficient Video Compression.”

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY RICHARD HINTON, whose telephone number is (703) 756-1051. The examiner can normally be reached Monday-Friday, 7:30-4:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hunter Lonsberry, can be reached at (571) 272-7298. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HENRY R HINTON/
Examiner, Art Unit 3665

/HUNTER B LONSBERRY/
Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

Dec 04, 2023
Application Filed
Oct 31, 2025
Non-Final Rejection — §102, §103
Jan 14, 2026
Response Filed
Mar 16, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601599: SYSTEM AND METHOD FOR IMPROVING THE LINEAR FEATURE AT INTERSECTION LOCATION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12566066: HYBRID INERTIAL/STELLAR NAVIGATION METHOD WITH HARMONIZATION PERFORMANCE INDICATOR
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12559914: EXCAVATOR MANAGEMENT SYSTEM, MOBILE TERMINAL FOR EXCAVATOR, AND RECORDING MEDIUM
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12523018: Management Apparatus and Management System for Work Machine
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12510897: RETURN NODE MAP
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+33.7%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 46 resolved cases by this examiner. Grant probability derived from career allow rate.
