Prosecution Insights
Last updated: April 19, 2026
Application No. 18/277,029

PERFORMANCE TESTING FOR MOBILE ROBOT TRAJECTORY PLANNERS

Final Rejection — §101, §102, §103
Filed: Aug 11, 2023
Examiner: TRAN, ALYSE TRAMANH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Five AI Limited
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (grants above average); 20 granted / 26 resolved; +24.9% vs TC avg
Interview Lift: +50.0% (strong), among resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 25 currently pending
Career History: 51 total applications across all art units

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 22.4% (-17.6% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 26 resolved cases

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to the communication filed on 10-OCT-2025 for Application No. 18/277,029. Claims 1-16 are currently pending and have been examined. Claims 1-16 have been rejected as follows.

Status of Application

This final office action is in response to Applicant's amendment received by the Office on 10-OCT-2025. Claims 1-16 have been presented in the application, of which claims 3-11 and 14 are previously presented/original. Claims 1, 2, 12, 13, 15, and 16 are amended. Accordingly, pending claims 1-16 are addressed herein.

Response to Amendment

The amendment filed on 10-OCT-2025 has been entered. Claims 1-16 remain pending in the application.

Response to Arguments

Applicant's arguments, filed 10-OCT-2025, with respect to the rejections of claims under §103 have been fully considered but are not persuasive.

On pages 6-7, Applicant argues that the art does not teach "the scenario ground truth generated using the trajectory planner to control an ego agent of the scenario responsive to at least one scenario element of the scenario" because the validation model of Morley is a "perfect driver" that outputs all the information about the scenario, not data that would be provided to computing devices by the perception system. The Examiner disagrees. There is no further description of the scenario ground truth in the claim limitations that excludes the teaching of "a validation model representing expected behaviors of an idealized human driver". Additionally, the limitation of "data that would be provided to computing devices by the perception system" is not included in the claims. The broadest reasonable interpretation of "ground truth" includes real-world data, and the validation model of Morley et al.
includes real-world data: Paragraph [44], "The set of characteristics and the set of rules may allow the validation model to control a virtual vehicle (i.e. brake, swerve, etc.) as if an ideal human were in control of the virtual vehicle. Such data is available from existing human response (or reaction) research or may be generated from running experiments to test actual human behavior." Therefore, the "validation model" can still be interpreted as the "ground truth".

On pages 7-8, Applicant argues that Morley does not disclose "receiving one or more performance evaluation rules for the scenario" because the rules in Morley are used to define how the validation model behaves when controlling a vehicle in a scenario, not to evaluate performance. The Examiner disagrees. The broadest reasonable interpretation of "performance evaluation rules", which are not further defined in the claims, includes rules for assessing how the vehicle should behave.

On pages 7-8, Applicant argues that Morley does not disclose "and at least one activation condition for each performance evaluation rule" because the activation condition in the claim is used to determine when to evaluate a performance evaluation rule (such as within a predefined distance), not a behavior that may be exhibited by a vehicle controlled by a validation model. The Examiner disagrees. The objects the vehicle must react to are being interpreted as the activation conditions for the performance evaluation rules.

On page 8, Applicant argues that Morley does not disclose "determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario" because the handover time is a predetermined time period in a scenario in which the vehicle is controlled by the software or validation model, not a period over which a performance evaluation rule is evaluated. The Examiner disagrees.
The broadest reasonable interpretation of "determin[ing] whether the activation condition of each performance evaluation rule is satisfied" includes determining when the activation condition is present, and thus determining when the object is present. The broadest reasonable interpretation of "over multiple time steps of the scenario" includes a period of time. Therefore, a time period determining when an object is present, or a handover time when the autonomous vehicle would collide with another present object, teaches the claim limitations.

On page 9, Applicant argues that Morley does not disclose "to provide at least one test result, only when its activation condition is satisfied", because "the validation model passed the driving scenario" is not determined "only when the activation condition is satisfied". The Examiner disagrees. The test result is being interpreted as whether the outcome of the scenario for the validation model indicates that a virtual vehicle under control of the validation model resulted in a collision, and the activation conditions are the objects being present that pose the threat of collision.

On pages 10-12, Applicant relies on a comparison with claim 3 of Example 46, which is patent eligible. The arguments relating to Example 46 are not persuasive. The claim combination of claim 3 in Example 46 contains multiple structures and at least a control step of "automatically operating the sorting gate" (see the analysis for step (d) of the cited Example 46). In contrast, the instant claims do not contain any additional element performing a control function that would integrate the claims into a practical application. The step of "providing a test result when a condition is satisfied" is considered insignificant post-solution activity per MPEP 2106.05(g). It is directed to the mere outputting of data (MPEP 2106.05(g), item 3).
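For technical context on the limitation in dispute, the claimed gating scheme (each performance evaluation rule is evaluated, over multiple time steps, only when its activation condition is satisfied) can be sketched in a few lines of Python. This is a minimal illustration only; every name below is invented and is not drawn from the application or from Morley:

```python
# Illustrative sketch (hypothetical names) of the claimed gating scheme:
# a test oracle walks the per-time-step scenario ground truth and
# evaluates a rule at a time step only when its activation condition holds.

def run_oracle(ground_truth, rules):
    """ground_truth: list of per-time-step scenario states.
    rules: list of (activation_condition, rule) predicate pairs."""
    results = []
    for t, state in enumerate(ground_truth):
        for condition, rule in rules:
            if condition(state):                   # activation condition satisfied?
                results.append((t, rule(state)))   # evaluate rule -> test result
    return results

# Toy usage: a rule that activates only when another agent is within
# 50 m of the ego agent, and passes while the gap stays above 10 m.
states = [{"gap_m": 80.0}, {"gap_m": 30.0}, {"gap_m": 5.0}]
rules = [(lambda s: s["gap_m"] < 50.0, lambda s: s["gap_m"] > 10.0)]
print(run_oracle(states, rules))  # rule is evaluated at steps 1 and 2 only
```

In this toy run the rule produces no result at step 0 (condition not active), a pass at step 1, and a fail at step 2.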
Note that selecting an information/rule based on the types of information/rules and the availability of the information/rule is also considered insignificant extra-solution activity. Hence, the claims remain ineligible.

On pages 13-14, Applicant argues that the claims overcome the §101 rejection because they enable automatic evaluation of a trajectory planner for a robot whilst limiting computational resource requirements. Applicant argues that the improvements made over the prior art systems, the meaningful limits on the scope of the claim, and the practical application do not merely apply the abstract idea to a computing device, but are an improvement to a technology that integrates any identifiable abstract ideas into a practical application. Applicant's argument concerning a technical improvement to the computer via "determining whether the activation condition of each performance evaluation rule is satisfied…" is not persuasive because an improvement to the identified mental step is not eligible. Applicant's argument concerning the improvement supported by paragraph [0016] is not persuasive because the specification fails to provide details regarding the manner in which the invention is accomplished. Selecting a matching information/rule for further evaluation does not contain a particular solution to a problem. Per MPEP 2106.05(a), if it is asserted that the invention improves upon conventional functioning of a computer, or upon conventional technology or technological processes, a technical explanation as to how to implement the invention should be present in the specification. The instant claims are also not similar to McRo, Inc. v. Bandai Namco Games America. In McRo, the claim, as supported by the specification, contains explicit features of how the rules are implemented for a technical improvement. It is noted that McRo also contains a control step of "…facial expression control of said animated characters".
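For readers following the rule-implementation dispute, the two-predicate encoding that dependent claims 3 and 13 describe (a first predicate, the activation condition, checked per scenario element at each time step to build an iterable of satisfying element identifiers, and a second predicate, the rule itself, evaluated only between the ego agent and those elements) could be sketched as follows. All names here are hypothetical and for illustration only:

```python
# Illustrative sketch (invented names, not from the application or the
# cited art) of the two-predicate encoding recited in claims 3 and 13.

def evaluate_rule(time_steps, activation, rule):
    """time_steps: list of (ego_state, {element_id: element_state}) pairs.
    activation, rule: predicates over an (ego, element) pair."""
    results = {}
    for t, (ego, elements) in enumerate(time_steps):
        # First predicate: the iterable of identifiers of scenario elements
        # whose activation condition is satisfied at this time step.
        active_ids = [eid for eid, elem in elements.items() if activation(ego, elem)]
        # Second predicate: evaluated only between the ego agent and the
        # elements that satisfied the first predicate.
        for eid in active_ids:
            results[(t, eid)] = rule(ego, elements[eid])
    return results

# Toy usage: states are 1-D positions; the rule activates for elements
# within 10 m of the ego agent and passes while the gap exceeds 2 m.
steps = [(0.0, {"a": 3.0, "b": 20.0}), (0.0, {"a": 1.0, "b": 8.0})]
near = lambda ego, e: abs(e - ego) < 10.0
clear = lambda ego, e: abs(e - ego) > 2.0
print(evaluate_rule(steps, near, clear))
```

Keying results by (time step, element identifier) keeps per-element pass/fail results distinct, which is what the per-element evaluation in claim 13 implies.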
Because of the arguments above, the prior art and §101 rejections are maintained.

Claim Rejections - 35 USC § 101

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Claims 1-14 are directed to a method for evaluating the performance of a trajectory planner for a mobile robot (i.e., a process). Claim 15 is directed to a system for evaluating the performance of a trajectory planner for a mobile robot (i.e., a machine). Claim 16 is directed to a computer-readable medium for evaluating the performance of a trajectory planner for a mobile robot (i.e., a manufacture). Therefore, claims 1-16 are within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection.
Claim 1 recites:

A computer-implemented method of evaluating a performance of a trajectory planner for a mobile robot in a real or simulated scenario, the method comprising: receiving scenario ground truth of the scenario, the scenario ground truth generated using the trajectory planner to control an ego agent of the scenario responsive to at least one scenario element of the scenario; receiving one or more performance evaluation rules for the scenario and at least one activation condition for each performance evaluation rule; and processing, by a test oracle, the scenario ground truth, to determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario, wherein each performance evaluation rule is evaluated by the test oracle, to provide at least one test result, only when its activation condition is satisfied.

The examiner submits that the foregoing bolded limitations constitute a "mental process" because, under its broadest reasonable interpretation, the claim covers performance of the limitations in the human mind. For example, "processing …" and "evaluated …", in the context of this claim, encompass a person assessing the scenario and determining whether the performance evaluation rules and activation conditions are satisfied. Accordingly, the claim recites at least two abstract ideas.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception.
The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a "practical application." In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations" while the bolded portions continue to represent the "abstract idea"):

A computer-implemented method of evaluating a performance of a trajectory planner for a mobile robot in a real or simulated scenario, the method comprising: receiving scenario ground truth of the scenario, the scenario ground truth generated using the trajectory planner to control an ego agent of the scenario responsive to at least one scenario element of the scenario; receiving one or more performance evaluation rules for the scenario and at least one activation condition for each performance evaluation rule; and processing, by a test oracle, the scenario ground truth, to determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario, wherein each performance evaluation rule is evaluated by the test oracle, to provide at least one test result, only when its activation condition is satisfied.

For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application. Regarding the additional limitations of "receiving …", the examiner submits that these limitations are insignificant extra-solution activities, as they are broad enough to include the pre-solution activity of gathering data. In particular, the receiving steps are recited at a high level of generality (i.e.
as general receipt of scenario, evaluation rule, and activation condition data), and amount to mere data gathering, which is a form of insignificant extra-solution activity. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above, the examiner submits that the additional "receiving…" limitations are insignificant extra-solution activities. Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional "receiving…" limitations are well-understood, routine, and conventional activities, as they are merely the collection of data. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, the claim is not patent eligible.

Dependent claims 2-14 do not recite any further limitations that cause the claims to be patent eligible.
Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application, as none of the dependent claims narrow the scope so as not to encompass performance of the limitations in the human mind. Therefore, dependent claims 2-14 are not patent eligible under the same rationale as provided for the rejection of claim 1. Similarly, claims 15 and 16 are rejected under the same rationale provided for the rejection of claim 1. Therefore, claims 1-16 are ineligible under 35 USC §101.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-13, 15, and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Morley et al. (US 2019/0213103 A1).

Regarding claim 1, Morley et al.
teaches: A computer-implemented method of evaluating a performance of a trajectory planner for a mobile robot (element 100) in a real or simulated scenario (Paragraph [46], "each scenario may include real world logged data (for instance, sensor data generated by a perception system, such as perception system 172 of vehicle 100), purely synthetic objects or sensor data created in simulation, or any combination of these"), the method comprising: receiving scenario ground truth of the scenario (Paragraph [44], "For instance, storage system 450 may store a validation model representing expected behaviors of an idealized human driver"; The broadest reasonable interpretation of “ground truth” includes real-world data, and the validation model of Morley et al. includes real-world data: Paragraph [44], “The set of characteristics and the set of rules may allow the validation model to control a virtual vehicle (i.e. brake, swerve, etc.) as if an ideal human were in control of the virtual vehicle. Such data is available from existing human response (or reaction) research or may be generated from running experiments to test actual human behavior.”), the scenario ground truth generated using the trajectory planner to control an ego agent of the scenario responsive to at least one scenario element of the scenario; (Paragraph [14], "To achieve this, the autonomous control software may be compared with a validation model of this idealized human driver based on results of simulations or scenarios") receiving one or more performance evaluation rules for the scenario (Paragraph [44], "The validation model may also include a set of rules for determining how a virtual vehicle should behave ") and at least one activation condition for each performance evaluation rule (Paragraph [44], "These rules may define behaviors such as ... 
how a virtual vehicle reacts to different objects"); and processing, by a test oracle (element 410), the scenario ground truth, to determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario (Paragraph [3], "identifying a handover time for giving the autonomous control software or the validation model control of the virtual vehicle in the scenario corresponding to a predetermined number of seconds within the scenario before the virtual vehicle would collide with the another object"), wherein each performance evaluation rule is evaluated by the test oracle, to provide at least one test result, only when its activation condition is satisfied (Paragraph [73], "At block 840, whether the validation model passed the driving scenario is determined based on whether outcome of the scenario for the validation model indicates that a virtual vehicle under control of the validation model collided with another object in any one of the plurality of times").

Regarding claim 2, Morley et al. teaches: The method of claim 1, wherein the scenario ground truth is processed to determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario (Paragraph [73], "At block 840, whether the validation model passed the driving scenario is determined based on whether outcome of the scenario for the validation model indicates that a virtual vehicle under control of the validation model collided with another object in any one of the plurality of times") for each scenario element of a set of multiple scenario elements (Paragraph [46], "Each scenario may include information defining an environment for a virtual vehicle, such as road information defining characteristics such as shape, location, direction, etc. of a roadway.
In addition, each scenario may also include object information defining characteristics of objects such as shape, location, orientation, speed, etc. of objects such as vehicles, pedestrians, bicyclists, vegetation, curbs, lane lines, sidewalks, crosswalks, buildings, etc"), wherein each performance evaluation rule is evaluated only when its activation condition is satisfied for at least one of the scenario elements, and only between the ego agent and the scenario elements for which the activation condition is satisfied (Paragraph [46], "In this regard, the scenarios are not merely vehicles just driving around, but situations in which the response of the vehicle is critical for safety of the vehicle and any other objects").

Regarding claim 3, Morley et al. teaches: The method of claim 1, wherein each performance evaluation rule is encoded in a piece of rule creation code (Paragraph [23], "The instructions 134 may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor...
The instructions may be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance") as a second logic predicate (Figure 8; element 840) and its activation condition is encoded in the piece of rule creation code as a first logic predicate (Figure 8; element 820), wherein at each time step, the test oracle evaluates the first logic predicate for each scenario element (Paragraph [46], "In this regard, the scenarios are not merely vehicles just driving around, but situations in which the response of the vehicle is critical for safety of the vehicle and any other objects."), and only evaluates the second logic predicate between the ego agent and any scenario element satisfying the first logic predicate (Figure 8; elements 820, 840; Paragraph [73], "At block 840, whether the validation model passed the driving scenario is determined based on whether outcome of the scenario for the validation model indicates that a virtual vehicle under control of the validation model collided with another object in any one of the plurality of times").

Regarding claim 4, Morley et al. teaches: The method of claim 1, wherein multiple performance evaluation rules (Paragraph [64], "For instance, the set of rules may define whether the model should ignore other objects in a scenario, try to avoid all objects in a scenario equally, or try to avoid some objects in a scenario (such as people, bicyclists, or other vehicles) more than other types of objects (such as curbs, medians, etc.)"), having different respective activation conditions, are received and selectively evaluated by the test oracle according to their different respective activation conditions (Paragraph [16], "The scenarios may be generated as a situation which tests the response to another object which is behaving improperly.
In this regard, the scenarios are not merely vehicles just driving around, but situations in which the response of the vehicle is critical for safety of the vehicle and any other objects.").

Regarding claim 5, Morley et al. teaches: The method of claim 1, wherein each performance evaluation rule pertains to driving performance (Paragraph [44], "The validation model may also include a set of rules for determining how a virtual vehicle should behave").

Regarding claim 6, Morley et al. teaches: The method of claim 1, comprising: rendering on a graphical user interface (GUI) respective results (Paragraph [41-42], "As an example the user may input information using a small keyboard, a keypad, microphone, using visual signals with a camera, or a touch screen…client computing device 440 may be an operations workstation used by an administrator or operator") for the multiple time steps in a time-series, the result at each time step visually indicating one category of at least three categories comprising (Paragraph [42], "to review scenario outcomes, handover times, and validation information as discussed further below"; this limitation is interpreted under the broadest reasonable interpretation per "visually indicating one category", which includes the instance where only the category when the activation condition is satisfied is visually indicated):

a second category when the activation condition is satisfied and the rule is passed (Paragraph [59], "The expected outcome may then be used to evaluate the autonomous control software's performance, or rather, to determine whether the autonomous control software “passed” or “failed” a given scenario. For instance, the autonomous control software may pass a scenario if there is no collision")

a third category when the activation condition is satisfied and the rule is failed (Paragraph [60], "All other collisions may be considered a “fail.”")

Regarding claim 8, Morley et al.
teaches: The method of claim 1, wherein the activation condition of a first performance evaluation rule of the performance evaluation rules is dependent on the activation condition of at least a second performance evaluation rule of the performance evaluation rules (Figure 7; Paragraph [58], "The autonomous control software may select only one response for each scenario to be followed by the virtual autonomous vehicle").

Regarding claim 9, Morley et al. teaches: The method of claim 8, wherein the first performance evaluation rule is deactivated when the second performance evaluation rule is active (Figure 7; Paragraph [58], "The autonomous control software may select only one response for each scenario to be followed by the virtual autonomous vehicle"; Paragraph [62], "As illustrated in example 700 of FIG. 7, these general categories of responses may include braking (represented by path A), swerving right (represented by path B), swerving left (represented by path C), braking and swerving right (represented by path D), or braking and swerving left (represented by path E)").

Regarding claim 10, Morley et al. teaches: The method of claim 9, wherein the second performance evaluation rule pertains to safety and the first performance evaluation rule pertains to comfort (Paragraph [59-60, 67], "Whether the validation model has passed or failed a scenario using a given one of the responses may be determined using the same or similar rules as for the autonomous control software.
As an example, the validation model may pass a scenario based on, for example, any of the following: if there is no collision, if there is no collision and at least some minimum buffer distance between the virtual vehicle and another object, if there is no collision and the vehicle did not need to make an unsafe maneuver to avoid a collision, if there is no collision and the reaction time to begin reacting to a potential collision in a scenario is not too slow, as in the examples discussed above"; all the rules, as cited, can relate to safety and comfort).

Regarding claim 11, Morley et al. teaches: The method of claim 1, wherein the scenario elements comprise one or more other agents (Paragraph [58], "This expected outcome may include information such as the final pose of the virtual autonomous vehicle, the final poses of any other vehicles or objects in the scenario, response times, whether there was a collision with any objects").

Regarding claim 12, Morley et al. teaches: The method of claim 11, wherein the scenario elements are a set of other agents (Paragraph [58], "This expected outcome may include information such as the final pose of the virtual autonomous vehicle, the final poses of any other vehicles or objects in the scenario, response times, whether there was a collision with any objects").

Regarding claim 13, Morley et al.
teaches: The method of claim 11, wherein the scenario ground truth is processed to determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario for each scenario element of a set of multiple scenario elements (Paragraph [3], "identifying a handover time for giving the autonomous control software or the validation model control of the virtual vehicle in the scenario corresponding to a predetermined number of seconds within the scenario before the virtual vehicle would collide with the another object"), wherein each performance evaluation rule is evaluated only when its activation condition is satisfied for at least one of the scenario elements (Paragraph [46], "The scenarios may be generated as a situation which tests the response to another object which is behaving improperly "), and only between the ego agent and the scenario elements for which the activation condition is satisfied (Paragraph [46], "In this regard, the scenarios are not merely vehicles just driving around, but situations in which the response of the vehicle is critical for safety of the vehicle and any other objects"), and wherein the activation condition is evaluated for each scenario element to compute, at each time step, an iterable containing identifier of any scenario elements for which the activation condition is satisfied (Paragraph [16-17], "The scenarios may be generated as a situation which tests the response to another object which is behaving improperly... A critical feature for each scenario is the “handover time” or the time when the autonomous control software and the validation model are given control of the vehicle within the scenario. 
The handover time may be automatically selected for each scenario according to the circumstances of that scenario"), the performance evaluation rule being evaluated by looping over the iterable at each time step (Paragraph [18], "the validation model may run the same scenario under each of a plurality of different responses").

Regarding claim 15, Morley et al. teaches: A computer system for evaluating a performance of a trajectory planner for a mobile robot in a real or simulated scenario, the computer system comprising: at least one memory configured to store computer-readable instructions (element 130); and at least one hardware processor coupled to the at least one memory (element 120) and configured to execute the computer-readable instructions, which upon execution cause the at least one hardware processor to implement operations comprising (element 134) receive scenario ground truth of the scenario (Paragraph [44], "For instance, storage system 450 may store a validation model representing expected behaviors of an idealized human driver"; The broadest reasonable interpretation of “ground truth” includes real-world data, and the validation model of Morley et al. includes real-world data: Paragraph [44], “The set of characteristics and the set of rules may allow the validation model to control a virtual vehicle (i.e. brake, swerve, etc.) as if an ideal human were in control of the virtual vehicle.
Such data is available from existing human response (or reaction) research or may be generated from running experiments to test actual human behavior.”) the scenario ground truth generated using the trajectory planner to control an ego agent of the scenario responsive to at least one scenario element of the scenario (Paragraph [14], "To achieve this, the autonomous control software may be compared with a validation model of this idealized human driver based on results of simulations or scenarios") receive one or more performance evaluation rules for the scenario (Paragraph [44], "The validation model may also include a set of rules for determining how a virtual vehicle should behave") and at least one activation condition for each performance evaluation rule (Paragraph [44], "These rules may define behaviors such as ... how a virtual vehicle reacts to different objects") process, by a test oracle (element 410), the scenario ground truth, to determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario (Paragraph [3], "identifying a handover time for giving the autonomous control software or the validation model control of the virtual vehicle in the scenario corresponding to a predetermined number of seconds within the scenario before the virtual vehicle would collide with the another object") wherein each performance evaluation rule is evaluated by the test oracle, to provide at least one test result, only when its activation condition is satisfied (Paragraph [73], "At block 840, whether the validation model passed the driving scenario is determined based on whether outcome of the scenario for the validation model indicates that a virtual vehicle under control of the validation model collided with another object in any one of the plurality of times").

Regarding claim 16, Morley et al.
teaches: A non-transitory computer readable medium (element 130; Paragraph [22], "computing device-readable medium, or other medium that stores data that may be read with the aid of an electronic device, such as a hard-drive, memory card, ROM, RAM, DVD or other optical disks, as well as other write-capable and read-only memories") embodying computer program instructions, the computer program instructions (element 134) configured so as, when executed on one or more hardware processors, to implement operations comprising (Paragraph [23], "The instructions 134 may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor"): receiving scenario ground truth of the scenario (Paragraph [44], "For instance, storage system 450 may store a validation model representing expected behaviors of an idealized human driver"; The broadest reasonable interpretation of “ground truth” includes real-world data, and the validation model of Morley et al. includes real-world data: Paragraph [44], “The set of characteristics and the set of rules may allow the validation model to control a virtual vehicle (i.e. brake, swerve, etc.) as if an ideal human were in control of the virtual vehicle. 
Such data is available from existing human response (or reaction) research or may be generated from running experiments to test actual human behavior.”), the scenario ground truth generated using a trajectory planner to control an ego agent of the scenario responsive to at least one scenario element of the scenario (Paragraph [14], "To achieve this, the autonomous control software may be compared with a validation model of this idealized human driver based on results of simulations or scenarios") receiving one or more performance evaluation rules for the scenario (Paragraph [44], "The validation model may also include a set of rules for determining how a virtual vehicle should behave ") and at least one activation condition for each performance evaluation rule (Paragraph [44], "These rules may define behaviors such as ... how a virtual vehicle reacts to different objects"); and processing, by a test oracle (element 410), the scenario ground truth, to determine whether the activation condition of each performance evaluation rule is satisfied over multiple time steps of the scenario (Paragraph [3], "identifying a handover time for giving the autonomous control software or the validation model control of the virtual vehicle in the scenario corresponding to a predetermined number of seconds within the scenario before the virtual vehicle would collide with the another object"), wherein each performance evaluation rule is evaluated by the test oracle, to provide at least one test result, only when its activation condition is satisfied (Paragraph [73], "At block 840, whether the validation model passed the driving scenario is determined based on whether outcome of the scenario for the validation model indicates that a virtual vehicle under control of the validation model collided with another object in any one of the plurality of times") Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Morley et al. (US 2019/0213103 A1) in view of Conde et al. (US 2019/0047559 A1).

Regarding Claim 7, Morley et al. teaches the limitations set forth above, including a method for evaluating robot trajectories with a GUI rendering the results of rules for timesteps according to claim 6 (rejected base claim 6). While Morley et al. teaches the limitations as stated above, it does not expressly disclose: the result is rendered as one colour of at least three different colours corresponding to the at least three categories.

However, Conde et al. teaches: The method of claim 6, wherein the result is rendered as one colour of at least three different colours corresponding to the at least three categories (Paragraph [42], "relative risks presented as red (indicating that the proposed vehicle maneuver will, barring an evasive maneuver, result in a collision), yellow (indicating that the proposed vehicle maneuver will bring the vehicle within the predetermined threshold distance discussed above with at least one obstacle and so is potentially dangerous), or green (indicating that, barring an unexpected change in direction by any obstacles ahead in the road, the proposed vehicle maneuver may be safely executed without risk of collision)").

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the user computing device of Morley et al., which accepts input and displays the results of the scenario performance on a screen, to show those results in red, yellow, and green, as taught by Conde et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, namely a method for evaluating the responses of a robot system wherein a user computing device with a screen shows the color-coded results of a scenario performance.

Regarding Claim 14, Morley et al. teaches the limitations set forth above, including a method for evaluating robot trajectories according to claim 13 (rejected base claim 13). While Morley et al. teaches the limitations as stated above, it does not expressly disclose: wherein the performance evaluation rule is defined as a computational graph applied to one or more signals extracted from the scenario ground truth, the iterable being passed through the computational graph in order to evaluate the rule between the ego agent and any scenario element satisfying the activation condition.

However, Conde et al. teaches: The method of claim 13, wherein the performance evaluation rule is defined as a computational graph applied to one or more signals extracted from the scenario ground truth (Figure 5; element 502; Paragraph [60]), the iterable being passed through the computational graph in order to evaluate the rule between the ego agent and any scenario element satisfying the activation condition (Figure 5; element 504; Paragraph [60]).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Morley et al. for evaluating a robot system by running the same scenario under different responses and checking for collisions, to include a neural network for evaluation, as taught by Conde et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, namely a method for evaluating the system of a robot utilizing a neural network to run the same scenario under different responses and check for collisions.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSE TRAMANH TRAN, whose telephone number is (703) 756-5879. The examiner can normally be reached M-F 8:30am-5pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.T.T./
Examiner, Art Unit 3656

/KHOI H TRAN/
Supervisory Patent Examiner, Art Unit 3656
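The evaluation pattern recited in claims 15 and 16 (performance rules with activation conditions, tested by an oracle over the time steps of scenario ground truth, with results falling into colour-coded categories as in claim 7) can be sketched as below. This is a minimal illustration, not the applicant's or either reference's implementation; all names, thresholds, and the headway rule are hypothetical.

```python
# Sketch (hypothetical names throughout) of a test oracle that evaluates each
# performance rule at a time step only when that rule's activation condition
# is satisfied, then bins the result into one of three colour categories.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

TimeStep = Dict[str, float]  # one frame of scenario ground truth

@dataclass
class Rule:
    name: str
    activation: Callable[[TimeStep], bool]   # when the rule applies
    evaluate: Callable[[TimeStep], float]    # robustness score at a step

def categorise(score: float) -> str:
    """Map a score to one of three categories (cf. claim 7's three colours)."""
    if score < 0.0:
        return "red"      # rule violated
    if score < 0.5:
        return "amber"    # marginal
    return "green"        # clear pass

def run_oracle(ground_truth: List[TimeStep],
               rules: List[Rule]) -> Dict[str, List[Tuple[int, str]]]:
    """Loop over the scenario time steps; for each rule, check its
    activation condition first and evaluate only when it holds."""
    results: Dict[str, List[Tuple[int, str]]] = {r.name: [] for r in rules}
    for t, step in enumerate(ground_truth):
        for rule in rules:
            if rule.activation(step):            # gate on activation
                results[rule.name].append((t, categorise(rule.evaluate(step))))
    return results

# Hypothetical headway rule: active only when another agent is within 50 m;
# score is the time headway in seconds minus a 2 s safety margin.
headway = Rule(
    name="safe_headway",
    activation=lambda s: s["gap_to_agent"] < 50.0,
    evaluate=lambda s: (s["gap_to_agent"] / max(s["ego_speed"], 0.1)) - 2.0,
)
trace = [
    {"ego_speed": 10.0, "gap_to_agent": 80.0},  # rule not active, no result
    {"ego_speed": 10.0, "gap_to_agent": 30.0},  # active, 3.0 s headway -> green
    {"ego_speed": 10.0, "gap_to_agent": 15.0},  # active, 1.5 s headway -> red
]
print(run_oracle(trace, [headway]))
```

The per-rule gating is the point of contention above: the rule contributes a test result only at steps where its activation condition holds, rather than being scored unconditionally over the whole scenario.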

Prosecution Timeline

Aug 11, 2023 — Application Filed
Jul 08, 2025 — Non-Final Rejection (§101, §102, §103)
Oct 10, 2025 — Response Filed
Jan 14, 2026 — Final Rejection (§101, §102, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12569994 — ROBOT APPARATUS — granted Mar 10, 2026 (2y 5m to grant)
Patent 12566071 — METHOD OF ROUTE PLANNING AND ELECTRONIC DEVICE USING THE SAME — granted Mar 03, 2026 (2y 5m to grant)
Patent 12544826 — BINDING DEVICE, BINDING SYSTEM, METHOD FOR CONTROLLING BINDING DEVICE, AND COMPUTER READABLE STORAGE MEDIUM STORING PROGRAM — granted Feb 10, 2026 (2y 5m to grant)
Patent 12539613 — DETECTION AND MITIGATION OF PREDICTED COLLISIONS OF OBJECTS WITH USER CONTROL SYSTEM — granted Feb 03, 2026 (2y 5m to grant)
Patent 12515321 — Method for Generating a Training Dataset for Training an Industrial Robot — granted Jan 06, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 99% (+50.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate

Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
