Prosecution Insights
Last updated: April 19, 2026
Application No. 18/008,070

TESTING AND SIMULATION IN AUTONOMOUS DRIVING

Non-Final OA: §103, §112
Filed: Dec 02, 2022
Examiner: BONSHOCK, DENNIS G
Art Unit: 3992
Tech Center: 3900
Assignee: Five AI Limited
OA Round: 1 (Non-Final)
Grant Probability: 43% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 44%

Examiner Intelligence

Career Allow Rate: 43% (33 granted / 77 resolved; -17.1% vs Tech Center average)
Interview Lift: +0.8% (minimal ~1% lift for resolved cases with an interview)
Avg Prosecution: 3y 6m (typical timeline)
Currently Pending: 28 applications
Career History: 105 total applications across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 3.8% (-36.2% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Tech Center averages shown as estimates. Based on career data from 77 resolved cases.

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This is a Non-Final Office Action on the instant application 18/008,070 (hereinafter the ‘070 application), filed on 12/2/2022. The ‘070 application claims priority under 35 U.S.C. § 371 to PCT EP2021/064938, filed 6/03/2021, as well as to United Kingdom GB2008354.9 (6/03/2020), EP20194498.0 (09/04/2020), United Kingdom GB2105836.7 (04/23/2021), and United Kingdom GB2107876.1 (06/02/2021). A certified copy of each has been received and placed on the record.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. The omitted structural cooperative relationships are: the claims state “the simulated agent trace of at least generated in at least one of the driving scenarios…”, which does not make grammatical sense. Appropriate correction is required to definitively establish what the claim limitation requires.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8, and 10-17 are rejected under 35 U.S.C. 103 as being unpatentable over “Adaptive Stress Testing for Autonomous Vehicles” by Mark Koren et al. (hereinafter Koren) in view of Siddiqui et al., Publication No. 2020/0353943 (hereinafter Si).

With regard to claim 1, which recites “A computer-implemented method of evaluating the performance of a full or partial autonomous vehicle (AV) stack in simulation, the method comprising:”, Koren teaches that the performance is evaluated in simulation; see e.g. page 1898, column 2, paragraph 1: "We present a simulation framework for autonomous vehicles that interfaces with our AST implementation." Here AST refers to adaptive stress testing. See also figure 2.
With regard to claim 1, additionally reciting “applying an optimization algorithm to a numerical performance function defined over a scenario space,”: Koren applies deep reinforcement learning to find most-likely failure scenarios; see page 1898, column 1, paragraph 3: "Adaptive stress testing (AST) has been proposed as a practical approach to finding most-likely failure scenarios by using a Markov decision process (MDP) formulation." and "This paper also proposes deep reinforcement learning (DRL) as an alternative solver for AST." Deep reinforcement learning is an optimization algorithm that optimizes a numerical performance function; see page 1898, column 2, paragraph 4: "The goal of an agent is to find a policy that specifies the action at [...] each state to maximize the expected utility." The utility corresponds to the numerical performance function and is defined over the state S, which represents the scenario space; see equation (1).

Si teaches a method and system for using a driving-scenario machine learning network and providing a simulated driving environment in which a user can evaluate the performance/scores of different scenarios to determine which require further evaluation (see paragraph 3). Si specifically teaches scenario classification in which certain driving scenarios are identified as interesting/exceptional because a collision or near-collision was identified due to interaction with outside agents (see paragraphs 76-79). Si further supplements here by noting optimization through training a machine learning network via a reinforcement learning method (similar to DRL in Koren), by processing the driving scenario data (including object trajectories) and goals by running “an iterative learning simulation until the goals are achieved or until the dynamic objects crash into one another. After the iterative learning simulation process is performed…the system 100 evaluates trajectories, and scores the trajectories based on a predetermined criteria” (see Si paragraph 64).

With regard to claim 1, additionally reciting “wherein the numerical performance function quantifies the extent of success or failure of the AV stack as a numerical score,”: in Koren the utility is used to find most-likely failure scenarios. It corresponds to the discounted sum of the reward, which is defined in equation (4). The reward is zero if the state is in the set E and negative elsewhere. The set E encodes the set of states of failure; see page 1899, column 1, paragraph 4: "The inputs to the problem are the pair (S, E), where S is a generative simulator that is treated as a black box and E is a subset of the state space where the event of interest (e.g. a collision) occurs." Hence, the numerical performance function quantifies the extent of success or failure of the AV stack as a numerical score. Further see Si supra.

With regard to claim 1, additionally reciting “and the optimization algorithm searches the scenario space for a driving scenario in which the extent of failure of the AV stack is substantially maximized;”: Koren teaches on page 1898, column 2, paragraph 4: "The goal of an agent is to find a policy that specifies the action at [...] each state to maximize the expected utility." Further see Si supra.

With regard to claim 1, additionally reciting “wherein the optimization algorithm evaluates multiple driving scenarios in the search space over multiple iterations,”: Koren teaches that reinforcement learning iteratively updates its policy by evaluating multiple driving scenarios in the search space over multiple iterations; see page 1899, column 2, paragraph 3: "We start with the solver, which samples environment actions and passes them to the simulator through the control functions INITIALIZE, STEP, and ISTERMINAL. The simulator applies these actions, updates its internal state, and outputs an indication whether an event in E occurred and the likelihood of the latest state transition. The reward function transforms the simulator outputs into a reward to be passed back to the solver. The solver completes the loop by using the reward to choose the next action." See also figure 1. Multiple driving scenarios are spanned by different numbers of pedestrians and initial conditions; see page 1900, column 2, paragraph 1: "We test with different numbers of pedestrians, as well as with different starting states."

With regard to claim 1, additionally reciting “by running a simulation of each driving scenario in a simulator, in order to provide perception inputs to the AV stack,”: Koren shows an overview of the simulator used to run the simulation in figure 2. It includes a sensor module to generate sensor data sensing the pedestrians; see page 1900, column 1, paragraph 1: "The sensors receive the new participant states and output measurements augmented with the noise from the environment actions." The sensor data is provided to the autonomous vehicle stack; see figure 2.

With regard to claim 1, additionally reciting “and thereby generate at least one simulated agent trace and a simulated ego trace reflecting autonomous decisions taken in the AV stack in response to the simulated perception inputs,”: Koren teaches that the evaluation of the reward in equation (4) is based on the position of the nearest pedestrian Pp and the position of the vehicle Pv. Since the reward is evaluated in every iteration, it is directly and unambiguously derivable that a simulated agent trace consisting of the sequence of pedestrian positions and a simulated ego trace consisting of the positions of the vehicle are generated.
The positions of the autonomous vehicle are governed by the AV stack, in particular by the Intelligent Driver Model, which reacts to the pedestrians; see page 1901, column 2, paragraph 2: "The SUT is based on the Intelligent Driver Model [18]. [...] The IDM then tries to follow a safe distance behind the pedestrian based on their relative velocity." Hence, the simulated ego trace reflects the autonomous decisions taken in the AV stack in response to the simulated perception inputs. Si further supplements here by specifically teaching the evaluation of “Ego” trajectories as compared over time with “Obj” trajectories or intended paths (see paragraphs 55-60).

With regard to claim 1, additionally reciting “wherein later iterations of the multiple iterations are guided by the results of previous iterations of the multiple iterations,”: Koren teaches that the policies applied in the later iterations depend on the previous iterations. Hence, the later iterations of the multiple iterations are considered to be guided by the results of the previous iterations of the multiple iterations. Si further supplements here by specifically teaching the above-noted iterative process of using previous results in future iterations to improve optimization (see paragraphs 64 and 68-70).

With regard to claim 1, additionally reciting “with the objective of finding the driving scenario for which the extent of failure of the AV stack is maximized”: Koren applies deep reinforcement learning to find most-likely failure scenarios; see page 1898, column 1, paragraph 3: "Adaptive stress testing (AST) has been proposed as a practical approach to finding most-likely failure scenarios by using a Markov decision process (MDP) formulation." and "This paper also proposes deep reinforcement learning (DRL) as an alternative solver for AST."
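The adaptive-stress-testing loop the examiner maps onto claim 1 (a solver proposes scenario parameters, a simulator produces traces, and a reward that is zero in the failure set E and negative elsewhere guides later iterations toward maximal failure) can be sketched as follows. This is a minimal illustration under invented assumptions: the toy simulator, the parameterization, and every name here are stand-ins, not Koren's actual implementation.

```python
import random

def run_scenario(params):
    """Toy 'simulator': returns the closest ego/pedestrian gap for a scenario."""
    ped_speed, ped_offset = params
    gap = abs(ped_offset - 0.5 * ped_speed)  # stand-in for the simulated traces
    collided = gap < 0.05                    # failure set E: a (near-)collision
    return gap, collided

def reward(gap, collided):
    """Zero on reaching E, negative elsewhere (less negative = closer to failure)."""
    return 0.0 if collided else -gap

def search(iterations=500, seed=0):
    """Hill-climbing stand-in for the DRL solver in the AST loop."""
    rng = random.Random(seed)
    best = (rng.uniform(0.0, 2.0), rng.uniform(0.0, 1.0))
    best_r = reward(*run_scenario(best))
    for _ in range(iterations):
        # Perturb the most-failing scenario found so far: later iterations
        # are guided by the results of earlier ones.
        cand = tuple(p + rng.gauss(0.0, 0.1) for p in best)
        r = reward(*run_scenario(cand))
        if r >= best_r:
            best, best_r = cand, r
    return best, best_r
```

The returned scenario is the one in which the extent of failure was maximized over the search, mirroring the claim 1 objective as the examiner characterizes it.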
It would have been obvious to one of ordinary skill in the art to combine the teachings of Si with those of Koren, as both references exist in the same art space and aim to solve the same problem by using machine learning to evaluate autonomous driving scenarios and to identify unique scenarios more critical to evaluation and improvement of the autonomous (or semi-autonomous) system.

Claims 16 and 17 are rejected for similar reasons to claim 1, as the references provide both a system and a medium for storing and processing the above-noted method.

With regard to claim 8, which recites “wherein the optimization algorithm is gradient-based, wherein each iteration computes a gradient of the performance function and the later iterations are guided by the gradients computed in the earlier iterations”: Koren teaches optimization being gradient-based, where later iterations rely on the gradients computed in earlier iterations (see page 1899, paragraphs 2-3).

With regard to claim 10, which recites “wherein the gradient of the performance function is estimated numerically in each iteration”: Koren teaches optimization being gradient-based for each batch, where later iterations rely on the gradients computed in previous iterations (see page 1899, paragraphs 2-3).

With regard to claim 11, which recites “wherein each scenario in the scenario space is defined by a set of scenario description parameters to be inputted to the simulator, the simulated ego trace dependent on the scenario description parameters and the autonomous decisions taken in the AV stack”: Koren teaches the scenario being defined as a set of scenario parameters input into the simulator, for example the velocity of the ith pedestrian, the position of the ith pedestrian, the ith pedestrian’s acceleration, etc. (see pages 1900-1901).
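Claims 8 and 10, as characterized above, describe each iteration numerically estimating a gradient of the performance function and letting it guide the next iterate. A minimal central-difference sketch of that structure is below; the toy performance function and every name are assumptions chosen for illustration (failure is maximal at scenario parameters (1.0, -2.0)), not anything taken from Koren.

```python
def numerical_gradient(f, x, eps=1e-4):
    """Central-difference estimate of the gradient of f at x (claim 10:
    the gradient of the performance function is estimated numerically)."""
    grad = []
    for i in range(len(x)):
        up, dn = list(x), list(x)
        up[i] += eps
        dn[i] -= eps
        grad.append((f(up) - f(dn)) / (2.0 * eps))
    return grad

def gradient_ascent(f, x0, lr=0.1, steps=200):
    """Claim 8 structure: each iteration computes a gradient, and the
    later iterations are guided by it."""
    x = list(x0)
    for _ in range(steps):
        g = numerical_gradient(f, x)
        x = [xi + lr * gi for xi, gi in zip(x, g)]
    return x

# Toy performance function: the extent of failure peaks at (1.0, -2.0).
perf = lambda x: -(x[0] - 1.0) ** 2 - (x[1] + 2.0) ** 2
x_star = gradient_ascent(perf, [0.0, 0.0])   # converges toward (1.0, -2.0)
```

Because the toy function is quadratic, the central-difference estimate is essentially exact and the ascent converges to the most-failing scenario parameters.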
With regard to claim 12, which recites “wherein the performance function is an aggregation of multiple time-dependent numerical performance metrics used to evaluate the performance of the AV stack, the time-dependent numerical performance metrics selected in dependence on environmental information encoded in the description parameters or generated in the simulator”: Koren teaches the scenario being a time-dependent numerical performance metric evaluating the velocity and position at a particular time (see page 1899, paragraph 4). Si further appreciates the position-over-time consideration in its ego trace and object trace (see paragraphs 55-60).

With regard to claim 13, which recites “wherein the numerical performance function is defined over a continuous numerical range”: Koren evaluates performance over a continuous numerical range (see page 1901, paragraph 3). This would be expected with systems such as those in Koren and Si, as scenarios with unique situations are given more attention and consideration. These situations are often related to others locally but also distinct from other different specific situations. Further see paragraph 99 of Si, which includes an example performance score formula.

With regard to claim 14, which recites “wherein the numerical performance function is a discontinuous function over the whole of scenario space, but locally continuous over localized regions of the scenario space, wherein the method comprises checking that each of the multiple scenarios is within a common one of the localized regions”: Koren evaluates performance in a discontinuous manner, though continuous over localized numerical ranges (see page 1901, paragraph 3). This would be expected with systems such as those in Koren and Si, as scenarios with unique situations are given more attention and consideration. These situations are often related to others locally but also distinct from other different specific situations. Further see paragraph 99 of Si, which includes an example performance score formula.

With regard to claim 15, which recites “wherein the numerical performance function is based on at least one of: distance between an ego agent and another agent, distance between an ego agent and an environmental element, comfort assessed in terms of acceleration along the ego trace, or a first or higher order time derivative of acceleration, progress”: Koren further bases its performance of scenarios on the evaluated distance between the SUT and the nearest pedestrian (see page 1901, paragraph 2). Si teaches a threshold distance within which close proximity is identified between objects, identifying a near-collision state (see paragraphs 92 and 99).

Claims 2-7, 9, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over “Adaptive Stress Testing for Autonomous Vehicles” by Mark Koren et al. (hereinafter Koren) and Siddiqui et al., Publication No. 2020/0353943 (hereinafter Si), as per claim 1 above, in further view of Chan et al., Patent No. 12,013,693 (hereinafter Chan).
With regard to claims 2 and 18, which recite “wherein the later iterations are guided by the earlier iterations, in combination with a predetermined acceptable failure model, with the objective of finding a driving scenario for which (i) the extent of failure is maximized and (ii) failure is unacceptable according to the acceptable failure model, wherein any driving scenario having (iii) a greater extent of failure but (iv) on which failure is acceptable according to the acceptable failure model is excluded from the search”: Chan teaches a system for simulation of scenarios of an autonomous or semi-autonomous vehicle (see column 5, lines 10-20 and column 15, lines 26-48), similar to that of Koren and Si, but further specifically teaches making a determination that in some scenarios accidents are unavoidable; that is, a reasonably skilled human would not have been able to avoid the accident given maximum acceleration/deceleration, steering rates, environmental data, etc. (see column 16, lines 22-35). Chan gives examples of these unavoidable scenarios, such as a blowout of a tire of an adjacent vehicle, swerving of another vehicle into the autonomous vehicle from very close, etc. When these scenarios are identified and failure is imminent, they are removed from further evaluation to preserve resources for other manageable action decisions that can be mitigated (see column 17, lines 1-18). It would have been obvious to one of ordinary skill in the art at the time to combine the teaching of Chan with that of Koren and Si, as each of the references exists in the same art space and aims to solve the same problem by using machine learning to evaluate autonomous driving scenarios and to identify unique scenarios more critical to evaluation and improvement of the autonomous (or semi-autonomous) system.
With regard to claims 3 and 19, which recite “wherein the acceptable failure model is applied to the simulated ego trace and the simulated agent trace of at least generated in at least one of the driving scenarios, in order to determine whether failure on that driving scenario is acceptable or unacceptable”: see the claim 2 rejection supra, which is equally applicable here. Further, the traces defined and outlined above are scenarios that are evaluated (at least initially) to see if they correspond to a special/unique designation and are dealt with accordingly (given more attention if preventable; given less attention to relieve processing usage if unpreventable).

With regard to claims 4 and 20, which recite “wherein the scenario space is defined by one or more scenario parameters, and the acceptable failure model excludes, from the search, predetermined values or combinations of values of the one or more scenario parameters”: see the claim 2 rejection supra, which is equally applicable here. When these scenarios are identified and failure is imminent, they are removed from further evaluation to preserve resources for other manageable action decisions that can be mitigated (see column 17, lines 1-18).

With regard to claim 5, which recites “wherein a constrained optimization method is used with the objective of finding a driving scenario fulfilling (i) and (ii), wherein (ii) is formulated as a set of one or more hard and/or soft constraints on the constrained optimization of (i)”: Si teaches constraints in its determination of maximum failure states, such as the “likelihood of the final collision” and the “number of calls to the step function”, each of which weighs in on the determination (see page 1901, paragraph 4 through page 1902, paragraph 1).
With regard to claim 6, which recites “wherein the acceptable failure model comprises one or more statistics derived from real-world driving data, which are compared with corresponding statistic(s) of a driving scenario, in order to determine whether or not failure on that driving scenario is acceptable”: Chan further teaches making the determination that in some scenarios accidents are unavoidable based on statistics of whether a reasonably skilled human would have been able to avoid the accident given maximum acceleration/deceleration, steering rates, environmental data, etc. (see column 16, lines 22-35).

With regard to claim 7, which recites “wherein the acceptable failure model comprises one or more acceptable failure rules applied to one or more blame assessment parameters extracted from the simulated traces”: see the claim 2 rejection supra, which is equally applicable here, where it is determined that a reasonably skilled human would not have been able to avoid the accident given maximum acceleration/deceleration, steering rates, environmental data, etc. (see column 16, lines 22-35), given the collision condition (an adjacent vehicle swerving into the vehicle when near, a blown tire, etc.).

With regard to claim 9, which recites “wherein a gradient-based constrained optimization method is used with the objective of finding a driving scenario fulfilling (i) and (ii), wherein (ii) is formulated as a set of one or more hard and/or soft constraints on the constrained optimization of (i), wherein each iteration computes a gradient of the performance function and the later iterations are guided by the gradients computed in the earlier iterations”: see the claim 2 rejection supra, which is equally applicable here. Koren teaches optimization being gradient-based for each batch, where later iterations rely on the gradients computed in previous iterations (see page 1899, paragraphs 2-3).
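The claim 2 / claim 5 structure discussed above, an acceptable failure model acting as a hard constraint that excludes unavoidable failures from the search while the extent of failure is maximized over the rest, can be sketched as follows. The model, the threshold, and the scoring function are all invented for illustration; nothing here is taken from Chan's actual disclosure.

```python
import random

def failure_acceptable(params):
    """Toy 'acceptable failure model': a cut-in closer than 0.2 is treated as
    unavoidable, so failure there is acceptable and excluded from the search."""
    _, cut_in_distance = params
    return cut_in_distance < 0.2

def extent_of_failure(params):
    """Toy performance function: closer, faster cut-ins fail harder."""
    speed, cut_in_distance = params
    return speed / (cut_in_distance + 0.1)

def constrained_search(candidates):
    # Hard constraint (claim 5): acceptable failures are excluded outright;
    # among the remaining scenarios, maximize the extent of failure (claim 2).
    feasible = [p for p in candidates if not failure_acceptable(p)]
    return max(feasible, key=extent_of_failure, default=None)

rng = random.Random(1)
candidates = [(rng.uniform(0, 30), rng.uniform(0, 2)) for _ in range(100)]
worst = constrained_search(candidates)
```

Scenarios with a greater raw failure score but an acceptable (unavoidable) failure never appear in `feasible`, which is exactly the exclusion recited in limitation (iv).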
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Farabet et al. (US 2019/0303759), Ros Sanchez (US 10,671,077), Liu et al. (US 2021/0157882), Wachi (US 2021/0064915), Song et al. (US 10,768,629), Grau (US 2019/0050512), Li et al. (US 10,031,526), and Partridge et al. (US 2020/0250363).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS G BONSHOCK, whose telephone number is (571) 272-4047. The examiner can normally be reached M-F 7:15-4:45. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Kosowski, can be reached at (571) 272-3744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DENNIS G BONSHOCK/
Primary Examiner, Art Unit 3992

Prosecution Timeline

Dec 02, 2022: Application Filed
Jan 23, 2026: Non-Final Rejection, §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent RE50696: SYSTEM AND METHOD FOR TRACKING WEB INTERACTIONS WITH REAL TIME ANALYTICS (2y 5m to grant; granted Dec 09, 2025)
Patent RE50641: POWER SAVING TECHNIQUES IN COMPUTING DEVICES (2y 5m to grant; granted Oct 14, 2025)
Patent RE50538: AUTOMATIC AVATAR CREATION (2y 5m to grant; granted Aug 19, 2025)
Patent RE50272: REMOTE OPTICAL ENGINE FOR VIRTUAL REALITY OR AUGMENTED REALITY HEADSETS (2y 5m to grant; granted Jan 14, 2025)
Patent RE50252: SCROLLING METHOD OF MOBILE TERMINAL AND APPARATUS FOR PERFORMING THE SAME (2y 5m to grant; granted Dec 31, 2024)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 43%
With Interview: 44% (+0.8%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
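The note above says the grant probability is derived from the career allow rate. A quick check of that arithmetic, assuming simple division (33 granted over 77 resolved) and an additive +0.8% interview lift, both assumptions about how the tool derives its figures:

```python
# Sanity check of the dashboard figures, under the stated assumptions.
granted, resolved = 33, 77
allow_rate = granted / resolved
print(round(allow_rate * 100))          # 43, matching "Grant Probability: 43%"
with_interview = allow_rate + 0.008     # +0.8% interview lift
print(round(with_interview * 100))      # 44, matching "With Interview: 44%"
```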
