Prosecution Insights
Last updated: April 19, 2026
Application No. 17/719,020

SYSTEMS AND METHODS FOR UNMANNED AERIAL VEHICLE SIMULATION TESTING

Final Rejection — §101, §102, §103
Filed: Apr 12, 2022
Examiner: MORRIS, JOSEPH PATRICK
Art Unit: 2188
Tech Center: 2100 — Computer Architecture & Software
Assignee: Iris Automation Inc.
OA Round: 2 (Final)
Grant Probability: 27% (At Risk)
OA Rounds: 3-4
To Grant: 4y 6m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 27% (4 granted / 15 resolved; -28.3% vs TC avg)
Interview Lift: +50.0% for resolved cases with interview
Avg Prosecution: 4y 6m (typical timeline)
Total Applications: 49 across all art units (34 currently pending)
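The panel figures above are simple derived statistics. A minimal sketch of the arithmetic (the ~55% Tech Center average is back-computed from the -28.3% delta shown above, and reading the interview lift as the percentage-point gap between the 77% and 27% figures is an assumption, not something the panel states):

```python
# Career allow rate: granted / resolved cases (counts from the panel above).
granted, resolved = 4, 15
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.0f}%")

# Delta versus the Tech Center average. The ~55% TC average is back-computed
# from the panel's -28.3% figure, for illustration only.
tc_average = 55.0
print(f"vs TC avg: {allow_rate - tc_average:+.1f}%")

# Interview lift, read here (assumption) as the percentage-point difference
# between the with-interview and baseline grant probabilities.
rate_with, rate_without = 77.0, 27.0
print(f"Interview lift: {rate_with - rate_without:+.1f} points")
```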

Statute-Specific Performance

§101: 30.9% (-9.1% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)

Based on career data from 15 resolved cases; TC averages are estimates.

Office Action

DETAILED ACTION

Claims 1-6, 8-16, and 18-20 are presented for examination. This Office Action is in response to the submission of documents on December 1, 2025.

The rejection of claims 7 and 17 is withdrawn as moot because the claims have been cancelled. The rejection of claims 1-6, 8-16, and 18-20 under 35 U.S.C. 101 as being directed to unpatentable subject matter is maintained. The rejection of claims 1-6, 9-16, and 19-20 under 35 U.S.C. 102 as being anticipated by Venkatadri is maintained. The rejection of claims 8 and 18 under 35 U.S.C. 103 as being obvious over Venkatadri in view of Chau is maintained.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding the rejection of the claims under 35 U.S.C. 101 as being directed to unpatentable subject matter, Examiner is not persuaded for the following reasons:

Applicant argues that “executing…a second simulation of the second test case according to the second simulation parameters” is a non-abstract idea. Response at pg. 8. Examiner disagrees. “Executing a simulation” is a step that is either a mathematical process (e.g., a model that simulates the behavior of real-world phenomena by executing mathematical functions that model that behavior) or a method of organizing human activity (e.g., a reenactment of a real-world situation that can be performed solely using human intervention). Thus, the step of “executing, by the one or more processors, a second simulation of the second test case according to the second simulation parameters” is execution of an abstract idea using generic computing components (i.e., applying an exception; see MPEP 2106.05(f)).
Applicant further argues that the claims “integrate any purported abstract idea into a practical application because the claimed technology provides a specific technical improvement to the functioning of training/simulation systems.” Response at pg. 8. In support of this argument, Applicant cites the Specification, which discloses that “conventional (including manual) generation of test cases will often fail to anticipate certain conditions (e.g., failures, other predetermined conditions, etc.)…” Response at pg. 8, citing Specification at [0081].

However, the claims do not reflect the purported improvements disclosed in the Specification. “After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology. Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316, 120 USPQ2d 1353, 1359 (Fed. Cir. 2016) (patent owner argued that the claimed email filtering system improved technology by shrinking the protection gap and mooting the volume problem, but the court disagreed because the claims themselves did not have any limitations that addressed these issues). That is, the claim must include the components or steps of the invention that provide the improvement described in the specification.” MPEP 2106.05(a).

As recited, the claims do not include any steps beyond what could be performed manually. Further, the claims do not address how the recited method identifies test cases that would otherwise go unanticipated by conventional methods. The claims recite a “target condition” but give no detail as to what is considered a “target condition,” such as a system failure and/or unwanted behavior of the simulated phenomena.
Thus, the claims, as currently presented, lack context as to what parameters are selected for the “second test case” other than those with an associated “priority value” that meets a threshold.

Finally, Applicant argues that “the claims recite ‘significantly more’ than any alleged abstract idea or generic computer implementation.” Again, Examiner is not persuaded. As rejected, the amended independent claims include only abstract ideas without any additional elements that would amount to “significantly more” than the recited judicial exceptions. The only elements that are not abstract ideas are the application of abstract ideas by generic computer components (e.g., “by the one or more processors”), which courts have found does not amount to “significantly more” than the recited judicial exceptions. See MPEP 2106.05(f); Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014); Gottschalk v. Benson, 409 U.S. 63, 70, 175 USPQ 673, 676 (1972); Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 112 USPQ2d 1750 (Fed. Cir. 2014); Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016).

Accordingly, because the claims, as currently presented, recite only abstract ideas and generic computer components (which neither integrate the abstract ideas into a practical application nor amount to significantly more), the rejection of pending claims 1-6, 8-16, and 18-20 under 35 U.S.C. 101 is maintained.

Regarding the rejection of claims 1-6, 9-16, and 19-20 as anticipated by Venkatadri, Examiner is not persuaded for the following reasons:

During the interview between the Examiners and the Applicant, and as memorialized in the Interview Summary Record of October 31, 2025, Examiner suggested amendments to the claims that recite an aerial simulation to overcome the rejections. The currently presented claims do not include amendments directed to such a limitation.
With regard to the proposed amendments, Examiner indicated that, if Applicant were to submit them, a distinction between the disclosure of the prior art and the claimed limitation would be necessary before a determination could be made as to whether the proposed amendments overcame the prior art. After reviewing the rejection, the prior art reference, and Applicant's arguments, Examiner does not agree that the submitted amendments overcome at least the Venkatadri reference.

The claims, as currently presented, recite “a priority value” associated with each of the “simulation parameters.” However, the claims do not recite limitations as to what comprises a “priority value” or a “priority threshold.” Accordingly, when given a reasonable interpretation, the “priority value” and “priority threshold” are disclosed by Venkatadri:

“For example, the second sampling rule can be configured to emphasize the parameters that caused and/or magnified the previous error in completing the braking maneuver (e.g., lowering an amount of time for the autonomous vehicle to complete the braking maneuver, adjusting road surface conditions, adding inclement weather, etc.).” Venkatadri at [0022].

“In such fashion, a sampling rule can be or otherwise describe a method of sampling parameters (e.g., for inclusion in the plurality of testing parameters, etc.) that is configured to emphasize a certain aspect of an autonomous vehicle testing scenario.” Venkatadri at [0029].

“In some implementations, the second plurality of testing parameters can include fewer parameters than the first plurality of testing parameters.
More particularly, by utilizing the optimization function to narrow the testing parameter search space, the second sampling rule can be more specifically focused on the source of error than the first sampling rule, and can therefore eliminate a number of extraneous or irrelevant testing parameters when sampling for the second plurality of testing parameters.” Venkatadri at [0035].

“After identifying the plurality of testing parameters associated with the performance metric, at least a subset of testing parameters from the plurality of testing parameters can be obtained.” Venkatadri at [0036].

Thus, “emphasiz[ing] the parameters that caused and/or magnified the previous error” and selecting parameters using a rule that “eliminate[s] a number of extraneous or irrelevant testing parameters” are examples of criteria that, when broadly interpreted, can reasonably be considered a “priority value” and selecting parameters that meet a “priority threshold” of influence on an error.

Accordingly, because Venkatadri anticipates independent claims 1 and 11, Examiner maintains the rejection of those claims under 35 U.S.C. 102(a)(2). Further, the rejections of the remaining pending claims under 35 U.S.C. 102(a)(2) or 35 U.S.C. 103 are maintained at least because those claims depend from claim 1 or 11.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to judicial exceptions without significantly more. The claims recite mental processes and/or mathematical concepts.
The recited judicial exceptions are not integrated into a practical application because the additional elements recited in the claims are extra-solution activities. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because courts have found that data gathering, data outputting, reciting generic computer components, and generally linking an exception to a particular field are not significantly more than a judicial exception.

Claim 1

Step 1: The claim is directed to a process, one of the four statutory categories of invention.

Step 2A, Prong 1: The claim 1 limitations, each followed by the corresponding abstract-idea identification, are as follows:

A method of generating test cases for a simulator based on simulated events, comprising:

Mental process. Generating a test case is a mental process that can be performed in the human mind. For example, a human can determine one or more conditions for a test case, such as velocity, vehicle direction, and/or weather conditions, and set one or more parameters based on the conditions. See, e.g., MPEP 2106.04(a)(2), Subsection III.

monitoring, by one or more processors coupled to memory, an output from a first simulation of a first test case, the first simulation executed according to a plurality of first simulation parameters, each of the plurality of first simulation parameters associated with a respective priority value;

Mental process. Monitoring output of a simulator can be performed by a human by reviewing logs indicating events that occur during the simulation. See, e.g., MPEP 2106.04(a)(2), Subsection III. See also Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016): “A wide-area real-time performance monitoring system for monitoring and assessing dynamic stability of an electric power grid.”

detecting, by the one or more processors, based on the output from the first simulation, a target condition resulting from the first test case;

Mental process. Detecting a condition from the output of a simulation requires observation, evaluation, opinion, and judgment to determine whether the output indicates the specified target condition. See, e.g., MPEP 2106.04(a)(2), Subsection III.

selecting, by the one or more processors, a subset of the plurality of first simulation parameters of the first test case associated with the target condition, the respective priority value of each simulation parameter of the subset satisfying a priority threshold;

Mental process. Selecting parameters based on criteria requires observation, evaluation, judgment, and opinion, and can be performed by a human using pencil and paper and/or in the human mind. See MPEP 2106.04(a)(2), Subsection III.

generating, by the one or more processors, a second test case having second simulation parameters based on modifying the subset of the plurality of first simulation parameters of the first test case; and

Mental process. Generating a test case is a mental process that can be performed in the human mind: a human can determine one or more conditions for a test case and set one or more parameters based on those conditions. Further, modifying parameters can include changing a parameter to a different unique value. See, e.g., MPEP 2106.04(a)(2), Subsection III.

executing, by the one or more processors, a second simulation of the second test case according to the second simulation parameters.

Mathematical concept. Executing a simulation includes performing one or more mathematical functions to model the behavior of real events in order to make predictions and/or observations of that behavior. See MPEP 2106.04(a)(2), Subsection I.

Step 2A, Prong 2: Claim 1 recites “one or more processors coupled to memory” as the only additional element. Reciting generic computer components amounts to instructions to apply the recited judicial exception, which courts have found does not integrate the judicial exception into a practical application. See MPEP 2106.05(f); Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014); Gottschalk v. Benson, 409 U.S. 63, 70, 175 USPQ 673, 676 (1972); Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 112 USPQ2d 1750 (Fed. Cir. 2014); Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016).

Step 2B: The inquiry at Step 2B is whether any of the additional elements (i.e., the elements that are not the judicial exception) amount to significantly more than the recited judicial exception. As indicated at Step 2A, Prong 2, the only additional element is generic computer components, which courts have found do not amount to significantly more than the recited judicial exceptions. See MPEP 2106.05(f) and the cases cited at Step 2A, Prong 2.

Accordingly, claim 1 is rejected as being directed to unpatentable subject matter.
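Read as an algorithm, the claim 1 steps form a short feedback loop: monitor a first simulation, detect a target condition, select the high-priority parameters, mutate them into a second test case, and re-run. A minimal sketch of that loop (the simulator stub, parameter names, thresholds, and mutation strategy are all illustrative assumptions, not the applicant's implementation):

```python
import random

def run_simulation(params):
    """Stand-in simulator: params maps name -> (value, priority).
    Returns an output dict; 'deviation' is an invented metric."""
    return {"deviation": sum(value for value, _ in params.values())}

def generate_second_test_case(first_params, priority_threshold=0.5):
    # Monitor the output from a first simulation of the first test case.
    output = run_simulation(first_params)

    # Detect a target condition from the output (the 1.0 cutoff is illustrative).
    if output["deviation"] <= 1.0:
        return None  # no target condition detected; nothing to refine

    # Select the subset of parameters whose priority value satisfies the threshold.
    subset = {name: (value, priority)
              for name, (value, priority) in first_params.items()
              if priority >= priority_threshold}

    # Generate a second test case by modifying the selected subset (+/-10% jitter).
    second_params = dict(first_params)
    for name, (value, priority) in subset.items():
        second_params[name] = (value * random.uniform(0.9, 1.1), priority)

    # Execute a second simulation according to the second simulation parameters.
    return run_simulation(second_params)

first = {"velocity": (1.2, 0.9), "altitude": (0.4, 0.2)}
result = generate_second_test_case(first)
```

The sketch only illustrates the recited data flow; it takes no position on whether that flow amounts to more than the abstract idea.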
Claim 2

Claim 2 recites:

wherein the first test case is selected from a plurality of first test cases having simulation parameters within a first parameter range;

Selecting a test case from a set of test cases is a mental process that can be performed in the human mind. For example, a human can identify test cases that have a parameter within a specified range and choose one of them as the first test case. See MPEP 2106.04(a)(2), Subsection III.

wherein generating the second test case comprises generating a plurality of second test cases having a simulation parameter within a second parameter range determined based on narrowing the first parameter range; and

Generating a test case is a mental process that can be performed in the human mind. For example, a human can determine one or more conditions for a test case, such as velocity, vehicle direction, and/or weather conditions, and set one or more parameters based on the conditions and the values of the parameters from the first case. See, e.g., MPEP 2106.04(a)(2), Subsection III.

wherein generating the second test case comprises selecting the second test case from the plurality of second test cases.

Selecting a test case from a set of test cases is a mental process that can be performed in the human mind. For example, a human can identify test cases that have a parameter within a specified range and choose one of them as the second test case. See MPEP 2106.04(a)(2), Subsection III.

Accordingly, claim 2 is rejected as being directed to unpatentable subject matter.

Claim 3

Claim 3 recites wherein the simulation parameter within the second parameter range is selected for each of the plurality of second test cases based on the target condition resulting from the first test case.
The claim merely further specifies conditions for selecting a parameter, which can be performed in the human mind, such as by observing the target condition and, based on evaluating the condition, selecting a parameter value. Accordingly, claim 3 is rejected as being directed to unpatentable subject matter.

Claim 4

Claim 4 recites determining the second parameter range based on the first parameter range and a rate of the target condition. The claim merely further specifies conditions for selecting a parameter, which can be performed in the human mind, such as by observing the target condition and, based on evaluating the condition, selecting a parameter value. Accordingly, claim 4 is rejected as being directed to unpatentable subject matter.

Claim 5

Claim 5 recites wherein the second test case is stochastically sampled from the plurality of second test cases. Stochastic sampling is a mathematical concept that requires performance of one or more functions, such as a random number generator, to select a test case according to a distribution. Accordingly, claim 5 is rejected as being directed to unpatentable subject matter.

Claim 6

Claim 6 recites wherein identifying the subset of the plurality of first simulation parameters of the first test case comprises determining a simulation time at which the target condition occurred; wherein identifying the subset of the plurality of first simulation parameters of the first test case comprises identifying one or more conditions of the first test case that occurred prior to the simulation time at which the target condition occurred; and wherein identifying the subset of the plurality of first simulation parameters of the first test case comprises extracting the first simulation parameters based on the one or more conditions of the first test case.
Identifying and extracting values are the extra-solution activity of data gathering, which courts have found does not integrate the judicial exception into a practical application and is not significantly more than the recited exception. See, e.g., In re Grams, 888 F.2d 835, 839-40, 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989); In re Meyers, 688 F.2d 789, 794, 215 USPQ 193, 196-97 (CCPA 1982); OIP Technologies, 788 F.3d at 1363, 115 USPQ2d at 1092-93; CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011). Accordingly, claim 6 is rejected as being directed to unpatentable subject matter.

Claim 7

Claim 7 recites wherein each of the first simulation parameters are associated with a priority value; and wherein extracting the first simulation parameters comprises extracting a subset of the first simulation parameters having a respective priority value that satisfies a threshold. Extracting values is the extra-solution activity of data gathering, which courts have found does not integrate the judicial exception into a practical application and is not significantly more than the recited exception. See, e.g., In re Grams, 888 F.2d 835, 839-40, 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989); In re Meyers, 688 F.2d 789, 794, 215 USPQ 193, 196-97 (CCPA 1982); OIP Technologies, 788 F.3d at 1363, 115 USPQ2d at 1092-93; CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011). Accordingly, claim 7 is rejected as being directed to unpatentable subject matter.

Claim 8

Claim 8 recites wherein the plurality of first simulation parameters and the second simulation parameters comprise at least one of a velocity value, an altitude value, a location value, a cloud cover value, a cloud type value, a roll value, a pitch value, a yaw value, environmental lighting conditions, or environmental objects.
The claim merely limits the application of the judicial exception to a particular field; namely, aviation simulations. Limiting a judicial exception to a particular field of use is an additional element that does not integrate the abstract idea into a practical application, and limitations that indicate a particular field of use are not significantly more than the recited judicial exception. See MPEP 2106.05(h); Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981); Bilski v. Kappos, 561 U.S. 593, 612, 95 USPQ2d 1001, 1010 (2010); Affinity Labs of Texas v. DirecTV, LLC, 838 F.3d 1253, 120 USPQ2d 1201 (Fed. Cir. 2016); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags). Accordingly, claim 8 is rejected as being directed to unpatentable subject matter.

Claim 9

Claim 9 recites wherein monitoring the output from the first simulation of the first test case comprises providing, to the first simulation, one or more events of the first test case at predetermined time intervals; and wherein monitoring the output from the first simulation of the first test case comprises receiving, from the first simulation, feedback information generated in response to the one or more events of the first test case as the output from the first simulation.

Providing and receiving data are extra-solution activities that courts have found do not integrate a judicial exception into a practical application and do not amount to significantly more than the judicial exception. See Intellectual Ventures I v. Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). Accordingly, claim 9 is rejected as being directed to unpatentable subject matter.

Claim 10

Claim 10 recites wherein detecting the target condition resulting from the first simulation comprises determining a difference between the output from the first simulation and an expected output value of the first test case; and wherein detecting the target condition resulting from the first simulation comprises detecting the target condition responsive to the difference exceeding a predetermined threshold.

The claim further specifies the “detecting” step of the independent claim, which is either a mental process (if the “difference between the output from the first simulation and an expected output value” can be determined in the human mind) or a mathematical concept (if the complexity of the “difference” calculation requires more than elementary mathematical calculations). Accordingly, claim 10 is rejected as being directed to unpatentable subject matter.

Claim 11

Claim 11 recites: A system configured for generating test and train cases for a simulator based on simulated events, the system comprising: one or more processors coupled to memory, the one or more processors configured to:

Reciting generic computer components amounts to instructions to apply the recited judicial exception, which courts have found does not integrate the judicial exception into a practical application. See MPEP 2106.05(f); Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014); Gottschalk v. Benson, 409 U.S. 63, 70, 175 USPQ 673, 676 (1972); Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 112 USPQ2d 1750 (Fed. Cir. 2014); Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016). The system is configured to perform steps substantially the same as those of the method recited in claim 1. Accordingly, for at least the same reasons as provided for claim 1, the remainder of claim 11 is rejected under 35 U.S.C. 101.

Claims 12-20

Claims 12-20 recite substantially the same limitations as claims 2-10. Accordingly, for at least the same reasons as claims 2-10, claims 12-20 are rejected under 35 U.S.C. 101 as being directed to unpatentable subject matter.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7, 9-17, and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Venkatadri et al. (U.S. Patent Publication No. 2022/0197280, hereinafter “Venkatadri”).
Claim 1

Venkatadri discloses:

A method of generating test cases for a simulator based on simulated events, comprising:

Once the first plurality of testing parameters 304 is obtained by the sampling module 302 based on the first sampling rule 302B, the autonomous vehicle testing scenario 302A can be simulated using the first plurality of testing parameters 304 using the simulation module 306 to obtain a first scenario output 308. Venkatadri at [0098].

The “testing scenario” is analogous to the “test case.”

monitoring, by one or more processors coupled to memory, an output from a first simulation of a first test case, the first simulation executed according to a plurality of first simulation parameters,

The first scenario output 308 can describe an overall pass/fail state for the autonomous vehicle testing scenario 304. Venkatadri at [0098]. An evaluation module 310 can evaluate an optimization function 311 to obtain simulation error data 312. The optimization function 311 itself can evaluate (and/or can be configured to evaluate) the first scenario output 308. Venkatadri at [0100].

The “first scenario” is analogous to the “first test case,” and the “testing parameters” used in the “first scenario” are analogous to the “first simulation parameters.”

each of the plurality of first simulation parameters associated with a respective priority value;

For example, the second sampling rule can be configured to emphasize the parameters that caused and/or magnified the previous error in completing the braking maneuver (e.g., lowering an amount of time for the autonomous vehicle to complete the braking maneuver, adjusting road surface conditions, adding inclement weather, etc.). Venkatadri at [0022].

The degree to which a testing parameter “caused and/or magnified the previous error” is analogous to a “priority value” because both relate to the importance of a particular parameter to the results of a simulation.
detecting, by the one or more processors, based on the output from the first simulation, a target condition resulting from the first test case;

More particularly, the optimization function (e.g., an error loss function, etc.) can evaluate the differences between the first scenario output 308 (e.g., a pass/fail state, performance values for the plurality of performance metrics, etc.) and ideal values and/or ranges of values for the plurality of performance metrics 302C to obtain the simulation error data 312. The first scenario output 308 can correspond to a performance metric of the plurality of performance metrics 302C. Venkatadri at [0100].

The “performance metric” is analogous to the “target condition.”

selecting, by the one or more processors, a subset of the plurality of first simulation parameters of the first test case associated with the target condition, the respective priority value of each simulation parameter of the subset satisfying a priority threshold;

In some implementations, the second plurality of testing parameters can include fewer parameters than the first plurality of testing parameters. More particularly, by utilizing the optimization function to narrow the testing parameter search space, the second sampling rule can be more specifically focused on the source of error than the first sampling rule, and can therefore eliminate a number of extraneous or irrelevant testing parameters when sampling for the second plurality of testing parameters. Venkatadri at [0035].

The “second plurality of testing parameters” is analogous to the “subset of the plurality of first simulation parameters.” The subset is selected based on the “optimization function,” which determines, for each parameter, the contribution of that parameter to an error.
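Under the construction applied here, a parameter's contribution to the prior error plays the role of the claimed “priority value,” and eliminating extraneous parameters plays the role of the “priority threshold.” A hypothetical sketch of that reading (the parameter names, attribution scores, and threshold are invented for illustration and appear in neither Venkatadri nor the application):

```python
# Hypothetical reading: each parameter's share of the prior error acts as its
# "priority value"; parameters below a contribution threshold are treated as
# extraneous and eliminated from the narrowed search space.
error_contribution = {          # invented attribution scores, illustration only
    "braking_time": 0.62,
    "road_surface": 0.25,
    "weather": 0.10,
    "radio_volume": 0.03,
}

contribution_threshold = 0.05   # the "priority threshold" under this reading

# The narrowed "second plurality of testing parameters."
second_plurality = [name for name, share in error_contribution.items()
                    if share >= contribution_threshold]
print(second_plurality)  # → ['braking_time', 'road_surface', 'weather']
```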
generating, by the one or more processors, a second test case having second simulation parameters based on modifying the subset of the plurality of first simulation parameters of the first test case; and

The second sampling rule 316 can be configured to emphasize the identified performance metric. More particularly, the second sampling rule 316 can be configured to select parameters that will increase the error associated with the performance metric of the plurality of performance metrics 302C. Venkatadri at [0102].

executing, by the one or more processors, a second simulation of the second test case according to the second simulation parameters.

In some implementations, the autonomous vehicle testing scenario can be simulated using the second plurality of testing parameters to obtain a second scenario output. Venkatadri at [0141].

Claim 2

Venkatadri discloses:

wherein the first test case is selected from a plurality of first test cases having simulation parameters within a first parameter range;

More particularly, in some implementations, scenarios can be organized within scenario families based on certain shared characteristics or commonalities (e.g., required maneuver(s), required operation(s) (e.g., perception, prediction, etc.), shared environmental characteristic(s) (e.g., adverse weather conditions, night time, day time, etc.), shared performance metric(s) (e.g., perception in response to occlusion, etc.), etc.). Venkatadri at [0028].

A “scenario family” is analogous to a “plurality of first test cases.”

FIG. 3 depicts an example data flow diagram for determination of a second sampling rule using an optimization function according to example embodiments of the present disclosure. More particularly, a first plurality of testing parameters 304 can be obtained using sampling module 302 based at least in part on a first sampling rule 302B. The first plurality of testing parameters 304 can be associated with an autonomous vehicle testing scenario 302A.
The autonomous vehicle testing scenario 302A can be or otherwise represent a scenario in which an autonomous vehicle can operate. As an example, the autonomous vehicle testing scenario 302A may represent a series of maneuvers that an autonomous vehicle must perform to navigate a specific series of roads. As another example, the autonomous vehicle testing scenario 302A may represent one or more maneuvers in response to action(s) taken by a separate testing entity(s) (e.g., additional vehicle(s), one or more pedestrian(s), a road obstruction, a traffic pattern, etc.). More generally, it should be noted that the autonomous vehicle testing scenario 302A can broadly represent any scenario that the autonomous vehicle could encounter. Venkatadri at [0093]. wherein generating the second test case comprises generating a plurality of second test cases having a simulation parameter within a second parameter range determined based on narrowing the first parameter range; and A second plurality of testing parameters can be obtained for the autonomous vehicle according to the second sampling rule 316 using the sampling module 302. In some implementations, the second plurality of testing parameters can include fewer parameters than the first plurality of testing parameters 304. More particularly, by utilizing the optimization function 311 to narrow the testing parameter search space, the second sampling rule 316 can be more specifically focused on the source of error than the first sampling rule 302B, and can therefore eliminate a number of extraneous or irrelevant testing parameters when sampling for the second plurality of testing parameters. Venkatadri at [0104]. “Narrow the testing parameter search space” is performed based on the previous scenario run of the simulation (the “first test case” with the “first parameter range”) wherein generating the second test case comprises selecting the second test case from the plurality of second test cases. 
As another example, a scenario family may be grouped based on a shared adverse weather condition. For example, a scenario family may include a first scenario in which a vehicle takes a right turn in icy road conditions, and a second scenario in which the vehicle takes a left turn against oncoming traffic in icy road conditions. As such, it should be broadly understood that a plurality of parameters can, in some implementations, describe the scenario itself, and that a sampling rule can additionally indicate whether a scenario should be sampled from within a scenario family or from a separate scenario family. Venkatadri at [0131]. The “separate scenario family” is a “plurality of second test cases.” Claim 3 Venkatadri discloses: wherein the simulation parameter within the second parameter range is selected for each of the plurality of second test cases based on the target condition resulting from the first test case. Based on the optimization function, the computing system can determine a second sampling rule associated with the performance metric. As an example, the first scenario output can indicate that the simulation of the autonomous vehicle testing scenario failed. Additionally, the first scenario output can identify errors (e.g., value(s) outside of an ideal, etc.) associated with a subset of the plurality of performance metrics. However, many of these errors may be of less importance than others, or may share a causal relationship with other errors (e.g., an error in behavior for a first performance metric may then cause an error in behavior for a second performance metric, etc.). As such, it is particularly important to identify the most influential, or important, errors among the subset of performance metrics. To do so, the optimization function can be evaluated to obtain the simulation error data that corresponds to the performance metric of particular importance from the subset of performance metrics. 
Based on the optimization function, a second sampling rule can be determined that is configured to emphasize the identified performance metric. More particularly, the second sampling rule can be configured to select parameters that will increase the error associated with the performance metric. For example, if the first sampling rule generated a minor error associated with the parameter (e.g., only slightly outside the range of ideal behavior, etc.), the second sampling rule can be configured to generate a greater error. In such fashion, the optimization function can be used to determine a second sampling rule that narrows the “search space” among the plurality of performance metrics, therefore facilitating identification of the source of the error associated with the performance metric. Venkatadri at [0034]. The “second sampling rule” is a selection of the second parameters and is generated based on the error of the “performance metric,” analogous to the “target condition.” Claim 4 Venkatadri discloses: determining the second parameter range based on the first parameter range and A second plurality of testing parameters can be obtained for the autonomous vehicle according to the second sampling rule 316 using the sampling module 302. In some implementations, the second plurality of testing parameters can include fewer parameters than the first plurality of testing parameters 304. More particularly, by utilizing the optimization function 311 to narrow the testing parameter search space, the second sampling rule 316 can be more specifically focused on the source of error than the first sampling rule 302B, and can therefore eliminate a number of extraneous or irrelevant testing parameters when sampling for the second plurality of testing parameters. Venkatadri at [0104]. a rate of the target condition. In some implementations, the performance metric data 500 may include a speed compliance metric 504. 
The speed compliance metric 504 may indicate, quantify, or otherwise describe ideal behavior for an aspect of autonomous vehicle performance in the simulation of the autonomous vehicle testing scenario. As an example, the speed compliance metric 504 may indicate an ideal degree of speed variation within a certain range of time. As another example, the speed compliance metric 504 may indicate a maximum number of times that the speed of the autonomous vehicle may deviate from an ideal speed range. Venkatadri at [0116]. “Speed compliance metric” is a rate. Claim 5 Venkatadri discloses: wherein the second test case is stochastically sampled from the plurality of second test cases. Additionally, or alternatively, in some implementations, the scenario type can be sampled from within a scenario family or from a separate scenario family. Venkatadri at [0130]. Claim 6 Venkatadri discloses: wherein identifying the subset of the plurality of first simulation parameters of the first test case comprises determining a simulation time at which the target condition occurred; For example, the parameters may describe an actor located at the sidewalk of an intersection, and who's behavior includes facing the intersection and walking into the intersection at predetermined time. As such, the parameters included in the first plurality of testing parameters can broadly describe any possible detail or characteristic of the autonomous vehicle testing scenario. Venkatadri at [0028]. To do so, the first sampling rule can indicate a sampling of parameters that are most likely to cause an error for the “entering intersection” performance metric. Venkatadri at [0029]. 
The “entering intersection performance metric” is a target condition and occurs at the “predetermined time.” wherein identifying the subset of the plurality of first simulation parameters of the first test case comprises identifying one or more conditions of the first test case that occurred prior to the simulation time at which the target condition occurred; and To do so, the first sampling rule can indicate a sampling of parameters that are most likely to cause an error for the “entering intersection” performance metric (e.g., adding a number of actors to the intersection, decreasing compliance for actors in the intersection, adding pedestrian actors to the intersection, reducing visibility, increasing occlusion of the vehicle, increasing adverse weather conditions, etc.). Venkatadri at [0029]. wherein identifying the subset of the plurality of first simulation parameters of the first test case comprises extracting the first simulation parameters based on the one or more conditions of the first test case. As yet another example, the parameters may describe a location, pose, type and/or behavior for each of one or more testing entities included in the scenario (e.g., specifying that a testing entity is riding a bicycle and is not compliant with road rules, etc.). Venkatadri at [0028]. Claim 7 Venkatadri discloses: wherein each of the first simulation parameters are associated with a priority value; and Additionally, the first scenario output 308 can identify errors (e.g., value(s) outside of an ideal, etc.) associated with a subset of the plurality of performance metrics 302C. However, many of these errors may be of less importance than others, or may share a causal relationship with other errors (e.g., an error in behavior for a first performance metric may then cause an error in behavior for a second performance metric, etc.). As such, it is particularly important to identify the most influential, or important, errors among the subset of performance metrics 302C. 
To do so, the optimization function 311 can be evaluated to obtain the simulation error data 312 that corresponds to the performance metric of particular importance from the subset of performance metrics 302C. Based on the optimization function 311, a second sampling rule 316 can be determined using the sampling rule generation module 314. Venkatadri at [0102]. “The most influential, or important, errors among the subset of performance metrics” are those with a high priority. wherein extracting the first simulation parameters comprises extracting a subset of the first simulation parameters having a respective priority value that satisfies a threshold. The second sampling rule 316 can be configured to emphasize the identified performance metric. More particularly, the second sampling rule 316 can be configured to select parameters that will increase the error associated with the performance metric of the plurality of performance metrics 302C. Venkatadri at [0102]. Parameters that contribute most to an error are selected to be sampled in subsequent test runs. Claim 9 Venkatadri discloses: wherein monitoring the output from the first simulation of the first test case comprises providing, to the first simulation, one or more events of the first test case at predetermined time intervals; and For example, the parameters may describe an actor located at the sidewalk of an intersection, and who's behavior includes facing the intersection and walking into the intersection at predetermined time. Venkatadri at [0026]. wherein monitoring the output from the first simulation of the first test case comprises receiving, from the first simulation, feedback information generated in response to the one or more events of the first test case as the output from the first simulation. 
As an example, the first scenario output can include a time-stepped log of the perception, prediction, motion planning, and any other operations performed by the autonomous vehicle during the autonomous vehicle testing scenario. Venkatadri at [0032]. Claim 10 Venkatadri discloses: wherein detecting the target condition resulting from the first simulation comprises determining a difference between the output from the first simulation and an expected output value of the first test case; and An evaluation module 310 can evaluate an optimization function 311 to obtain simulation error data 312. The optimization function 311 itself can evaluate (and/or can be configured to evaluate) the first scenario output 308. More particularly, the optimization function (e.g., an error loss function, etc.) can evaluate the differences between the first scenario output 308 (e.g., a pass/fail state, performance values for the plurality of performance metrics, etc.) and ideal values and/or ranges of values for the plurality of performance metrics 302C to obtain the simulation error data 312. Venkatadri at [0100]. wherein detecting the target condition resulting from the first simulation comprises detecting the target condition responsive to the difference exceeding a predetermined threshold. As an example, the first scenario output 308 may deviate from ideal values for a subset of the plurality of performance metrics 302C. The optimization function 311 (e.g., the derivative of an R2 function, an error loss function, a loss and/or optimization function comprising multiple weighted terms, etc.) can evaluate the first scenario output 308 to obtain the simulation error data 312. The simulation error data 312 can correspond to, or otherwise indicate, one performance metric of particular importance. Venkatadri at [0100]. 
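The claim-10 mapping above turns on a concrete mechanism: compare the first simulation's output against an expected value, treat a deviation that exceeds a predetermined threshold as the target condition, and (per the claim-1 analysis) keep only the high-priority parameters before generating and executing a second test case. The loop can be sketched as follows; every metric name, parameter name, threshold, and formula here is invented for illustration and comes from neither the application nor Venkatadri:

```python
# Hypothetical sketch of the claimed loop: run a first simulation, flag
# target conditions by thresholded deviation from ideal values, narrow to
# high-priority parameters, and run a second simulation. Illustrative only.

IDEAL = {"speed_compliance": 0.0, "lane_keeping": 0.0}  # ideal metric values
TARGET_THRESHOLD = 0.5     # deviation that counts as a "target condition"
PRIORITY_THRESHOLD = 0.6   # cutoff for the high-priority parameter subset

def simulate(params):
    """Stand-in for a real simulator: maps parameters to per-metric outputs."""
    return {"speed_compliance": 0.8 * params["wind"],
            "lane_keeping": 0.2 * params["cloud_cover"]}

def detect_target_conditions(output):
    """Claim-10-style check: flag each metric whose difference from the
    expected (ideal) value exceeds the predetermined threshold."""
    return [m for m, value in output.items()
            if abs(value - IDEAL[m]) > TARGET_THRESHOLD]

def second_test_case(params, priorities):
    """Claim-1-style narrowing: keep only parameters whose priority value
    satisfies the threshold, then modify that subset."""
    subset = {k: v for k, v in params.items()
              if priorities[k] >= PRIORITY_THRESHOLD}
    return {k: v * 1.5 for k, v in subset.items()}

first_params = {"wind": 1.0, "cloud_cover": 0.3}
priorities = {"wind": 0.9, "cloud_cover": 0.2}

first_output = simulate(first_params)
targets = detect_target_conditions(first_output)            # ['speed_compliance']
second_params = second_test_case(first_params, priorities)  # {'wind': 1.5}
# Execute the second simulation with the modified subset applied.
second_output = simulate({**first_params, **second_params})
```

Under this reading, the dispute is not over whether such a loop exists in Venkatadri but over whether its "optimization function" narrowing is the same operation as selecting a priority-thresholded subset of parameters.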
Claim 11 Venkatadri discloses: A system configured for generating test and train cases for a simulator based on simulated events, the system comprising: one or more processors coupled to memory, the one or more processors configured to: The autonomous vehicle computing system 940 can include one or more computing device(s) 941 that are remote from the service entity computing system 920 and the third-party entity computing system 930. The computing device(s) 941 can include one or more processors 943 and a memory 942. Venkatadri at [0164]. Claim 11 further recites substantially the same limitations as recited in claim 1. Accordingly, for at least the same reasons and based on the same prior art, claim 11 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Venkatadri. Claims 12-17 and 19-20 Claims 12-17 and 19-20 recite the system of claim 11 and limitations that are substantially the same as the limitations disclosed in claims 2-7 and 9-10. Accordingly, for at least the same reasons and based on the same prior art as claims 2-7 and 9-10, claims 12-17 and 19-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Venkatadri. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being obvious over Venkatadri in view of Chau, et al. (U.S. Patent No. 9,545,995, hereinafter “Chau”). 
Claim 8 Venkatadri discloses: wherein the plurality of first simulation parameters and the second simulation parameters comprise at least one of a velocity value, As described previously, the testing parameters 208 can define a step (also referred to as a tick) duration, a vehicle movement strategy (e.g., no operation, move to pickup and drop-off locations, move using a constant speed and straight line, or move using a constant number of steps, etc.). Venkatadri at [0088]. a location value, The present disclosure provides a number of technical effects and benefits. As one example technical effect and benefit, simulation of autonomous vehicle testing scenarios can often include a substantial number of adjustable parameters (e.g., testing entity location/pose/behavior, environmental conditions, etc.). Venkatadri at [0050]. a cloud cover value, As an example, the parameters may describe environmental condition(s) for the autonomous vehicle testing scenario (e.g., humidity, sunlight, cloud coverage, weather, temperature, wind, etc.). Venkatadri at [0026]. a cloud type value, As an example, the parameters may describe environmental condition(s) for the autonomous vehicle testing scenario (e.g., humidity, sunlight, cloud coverage, weather, temperature, wind, etc.). Venkatadri at [0026]. environmental lighting conditions, or Vehicle testing parameters associated with the vehicle testing scenario 602 can specify one or more conditions in which the simulation of the vehicle testing scenario 602 takes place (e.g., rain conditions 612A, night-time conditions 612B, etc.). Venkatadri at [0120]. environmental objects. 
As another example, the parameters may describe operational condition(s) for the autonomous vehicle testing scenario (e.g., a speed limit, no-stopping zones, a number of lanes, object(s) included in a road network, lateral clearance, underbody clearance, turn radius, a degree of incline/decline, bike lanes, general road conditions (e.g., surface characteristics, types of lanes, laws of the road, etc.), etc.). Venkatadri at [0026]. Venkatadri does not appear to disclose: an altitude value, a roll value, a pitch value, a yaw value, Chau, which is analogous art, discloses: an altitude value, a roll value, a pitch value, a yaw value, In some embodiments, performing the simulation using the first control model to determine how the virtual UAV would move in response to receiving the input command may include identifying a change in an altitude of the virtual UAV, a speed of the virtual UAV, a roll state of the virtual UAV, a pitch state of the virtual UAV, a yaw state of the virtual UAV, or any combination thereof. Chau at col. 2, lines 58-65. Chau is analogous art to the claimed invention because both are related to UAV simulations. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the autonomous vehicle simulation of Venkatadri with the UAV simulation of Chau to result in a system that operates as disclosed in Venkatadri but with a UAV instead of a ground-based autonomous vehicle. Venkatadri discloses such an application: “It should be noted that the examples of the present disclosure are primarily described in the context of a ground-based autonomous vehicle merely to illustrate the various systems and methods of the present disclosure. 
Rather, the autonomous vehicle(s) of the present disclosure can be any sort or type of autonomous vehicle, including but not limited to ground-based autonomous vehicles, water-based autonomous vehicles, and/or aerial autonomous vehicles (e.g., vertical take-off and landing vehicles, etc.).” Venkatadri at [0043]. Claim 18 Claim 18 recites the system of claim 11 and limitations that are substantially the same as the limitations disclosed in claim 8. Accordingly, for at least the same reasons and based on the same prior art as claim 8, claim 18 is rejected under 35 U.S.C. 103 as being obvious over Venkatadri in view of Chau. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Behm, et al. (U.S. Patent Pub. No. 2009/0006066): “Method and System for Automatic Selection of Test Cases” Rasche, et al. (WIPO Application No. WO2019162293): “Method for identifying critical test cases in the context of highly automated driving” Danna, et al. (U.S. Patent Pub. No. 2022/0198096): “Generating Accurate and Diverse Simulations for Evaluation of Autonomous-Driving Systems” THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Communication Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH MORRIS whose telephone number is (703)756-5735. The examiner can normally be reached M-F 8:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. JOSEPH MORRIS Examiner Art Unit 2188 /JOSEPH P MORRIS/Examiner, Art Unit 2188 /RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188

Prosecution Timeline

Apr 12, 2022
Application Filed
Jul 25, 2025
Non-Final Rejection — §101, §102, §103
Oct 28, 2025
Applicant Interview (Telephonic)
Oct 28, 2025
Examiner Interview Summary
Dec 01, 2025
Response Filed
Dec 23, 2025
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579465
ESTIMATING RELIABILITY OF CONTROL DATA
2y 5m to grant Granted Mar 17, 2026
Patent 12560921
MACHINE LEARNING PLATFORM FOR SUBSTRATE PROCESSING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
27%
Grant Probability
77%
With Interview (+50.0%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
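The headline projections follow directly from the career statistics shown earlier on the page (4 grants out of 15 resolved cases, and a +50.0% interview lift). A quick check, assuming the lift is applied as percentage points on top of the baseline rate:

```python
# Reconstructing the displayed projection figures from the page's own
# statistics. Assumption: the "+50.0% interview lift" is added as
# percentage points to the baseline career allow rate.

career_allow_rate = 4 / 15          # 4 granted / 15 resolved ≈ 26.7%
interview_lift = 0.50               # +50.0 percentage points with interview
with_interview = career_allow_rate + interview_lift

print(round(career_allow_rate * 100))  # 27  ("27% Grant Probability")
print(round(with_interview * 100))     # 77  ("77% With Interview")
```

Both rounded values match the figures displayed above, which suggests the dashboard derives them this way rather than from a case-level model.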
