Prosecution Insights
Last updated: April 19, 2026
Application No. 18/564,300

TOOLS FOR PERFORMANCE TESTING AUTONOMOUS VEHICLE PLANNERS

Final Rejection: §101, §103, §112

Filed: Nov 27, 2023
Examiner: AGUILERA, TODD
Art Unit: 2192
Tech Center: 2100 — Computer Architecture & Software
Assignee: Five AI Limited
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 57% of resolved cases.

Career Allowance Rate: 57% (282 granted / 493 resolved; +2.2% vs Tech Center average)
Interview Lift: +57.1% in resolved cases with an interview
Typical Timeline: 3y 8m average prosecution; 37 applications currently pending
Career History: 530 total applications across all art units

Statute-Specific Performance

§101: 16.6% (-23.4% vs TC avg)
§103: 39.7% (-0.3% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 29.4% (-10.6% vs TC avg)

Tech Center averages are estimates; figures based on career data from 493 resolved cases.
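The interview-lift figure reported above is, presumably, the allowance rate among resolved cases that had an examiner interview minus the rate among those that did not. A minimal Python sketch of that computation; the per-group counts below are invented for illustration (only the 282 granted / 493 resolved career totals come from this report):

```python
def allowance_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(granted_with: int, resolved_with: int,
                   granted_without: int, resolved_without: int) -> float:
    """Percentage-point difference in allowance rate between
    interviewed and non-interviewed resolved cases."""
    return (allowance_rate(granted_with, resolved_with)
            - allowance_rate(granted_without, resolved_without))

# Hypothetical with/without split; chosen only so the groups sum to the
# reported career totals (282 granted, 493 resolved).
lift = interview_lift(granted_with=90, resolved_with=95,
                      granted_without=192, resolved_without=398)
```

Any split of the career totals would plug into the same two-rate subtraction; the published +57.1% figure implies a very high allowance rate in the interviewed group.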

Office Action

Rejections under §101, §103 and §112
DETAILED ACTION

Remarks

Applicant presents a communication dated 12 November 2025 in response to the 12 August 2025 non-final rejection (the “Previous Action”). With the communication: claims 1, 5-6, 16 and 19 are amended; claims 2-4 are cancelled; and figures 2 and 5 are amended. Claims 1 and 5-20 are pending. Claims 1, 16 and 19 are the independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner Notes

Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

Applicant argues that it has incorporated the subject matter of claim 4 into the independent claims and that the claims thus are not directed to a judicial exception without significantly more. Examiner respectfully disagrees and points out that the independent claims only refer to selection “via the graphical user interface”, as opposed to what was previously recited by cancelled claim 4. The rejection is accordingly maintained. Applicant's remaining arguments are moot in view of the withdrawn rejections, withdrawn objections and new ground(s) of rejection below, necessitated by Applicant's amendments.

Drawings

The Previous Action's objections to the drawings are withdrawn in view of Applicant's drawing amendments.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 6 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
As to claim 6, the features of this claim describe rendering a plurality of cards, and in the originally filed specification each of these cards displays a different performance indicator for each run. (See, e.g., Figure 4.) However, per claim 1, only a “single” performance indicator for each run is generated. There is no original support for a plurality of cards combined with generation of only a “single” performance indicator for each run as claimed.

Claim Rejections - 35 USC § 101

The Previous Action's § 101 rejections are withdrawn in view of Applicant's claim amendments, unless reproduced herein.

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 5-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
As to claim 1, the claim recites:

a computer implemented method of evaluating a planner performance for an ego robot, the method comprising: receiving first run data of a first run, the run data generated by applying a planner in a scenario of that run to generate an ego trajectory taken by the ego robot in the scenario; extracting scenario data from the first run data to generate scenario data defining the scenario; providing the scenario data to a simulator configured to execute a simulation using the scenario data and implement a second planner to generate second run data; comparing the first run data and the second run data to determine a difference in at least one performance parameter; and generating a performance indicator associated with the run, the performance indicator indicating a level of the determined difference between the at least one performance parameter in the first run data and the second run data, wherein the method is carried out for a plurality of runs, and further comprises: generating a single respective performance indicator for each run of the plurality of runs; rendering, on a graphical user interface, visual representations of the respective performance indicators, wherein each of the visual representations is provided by a respective single tile; in response to selection, via the graphical user interface, of a particular tile corresponding to a particular run of the plurality of runs, opening a page associated with the particular run.

Though the claim is directed to a process (Step 1), under the broadest reasonable interpretation in light of the specification, the above underlined elements recite a mental process because the elements are performable by the human mind with aid of pen and paper. The claim therefore recites an abstract idea. None of the additional elements integrate the judicial exception into a practical application.
(Step 2A) Referring to the method as “computer-implemented”, rendering “on a graphical user interface” and selection “via a user interface” amount to nothing more than implementing the abstract idea using generic computer components. See M.P.E.P. § 2106.05(f). And the test generation unit “providing the scenario data to a simulator…” is insignificant pre-solution activity at least because it only amounts to necessary data gathering. See M.P.E.P. § 2106.05(g).

Looking at the claim limitations as an ordered combination yields the same conclusion as that reached when looking at the elements individually. Their collective function is merely to apply the abstract idea in a generic computer along with necessary data gathering. The claim does not include additional elements that amount to significantly more than the judicial exception either (Step 2B), for substantially the same reasons discussed above with respect to a practical application. Note that reevaluation of the extra-solution activity per Step 2B does not indicate that this element is anything more than what is well-understood, routine and conventional in the field. See Bondor, “Standardized scenarios for safer roads” at p. 1 (“The difficulty is the range of simulation tools, each of these has its own method of defining scenarios”).

As to claims 5-6, the features of these claims do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because rendering a graphical user interface amounts to nothing more than implementing the abstract idea using a generic computer and the remaining features of the claims only further describe the abstract idea itself.

As to claims 7, 8, 14 and 15, the features of these claims do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because they only further describe the abstract idea itself.

As to claims 9-11, the features of these claims do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because rendering a graphical user interface amounts to nothing more than implementing the abstract idea using a generic computer and the remaining features of these claims only further describe the abstract idea itself.

As to claim 12, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because supplying the scenario data to the simulator configured to execute a third planner having certain modifications to generate third run data is insignificant extra-solution activity for the reasons set forth above with respect to claim 1, and because the remaining limitations are performable by the human mind with aid of pen and paper.

As to claim 13, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because they only further describe the abstract idea itself.

As to claim 16, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more for the same reasons as claim 1 and because “an apparatus comprising a processor; and a code memory configured to store computer readable instructions for execution by the processor” to perform the functions of claim 1 amounts to nothing more than implementing the abstract idea on a generic computer.

As to claim 17, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because the addition of a graphical user interface amounts to nothing more than implementing the abstract idea on a generic computer.

As to claim 18, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because they only further describe the abstract idea itself.

As to claim 19, the claim recites an abstract idea without integrating the abstract idea into a practical application or amounting to significantly more for the same reasons as claim 1 and because “a computer program comprising a set of computer readable instructions, which when executed by a processor cause the processor” to perform the functions of claim 1 amounts to nothing more than implementing the abstract idea on a generic computer.

As to claim 20, the features of this claim do not add any additional elements integrating the abstract idea into a practical application or amounting to significantly more because they only further describe the abstract idea itself.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-8 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chu et al. (US 11,921,504) (art of record – hereinafter Chu) in view of Kolman et al. (US 2008/0155354) (art of record – hereinafter Kolman).
As to claim 1, Chu discloses a computer implemented method of evaluating a planner performance for an ego robot, the method comprising:

receiving first run data of a first run, the run data generated by applying a planner in a scenario of that run to generate an ego trajectory taken by the ego robot in the scenario (e.g., Chu, col. 2 ll. 31-35: the controller may be configured to control an autonomous vehicle [ego robot]; col. 8 ll. 20-31: autonomous controller 102 may include a planner component. The planner component may be configured to determine one or more vehicle trajectories [ego trajectory taken by the ego robot] associated with traveling along a route to a destination; col. 3 l. 62 – col. 4 l. 5: data associated with previous operation of the vehicle, the data may include determined actions by which a controller controlled the vehicle, times associated with the actions, and the like; col. 12 ll. 25-27: the simulation computing system may receive metrics from a vehicle computing system such as with the data received above);

extracting scenario data from the first run data to generate scenario data defining the scenario (e.g., Chu, col. 4 ll. 10-15: a scenario may include a portion of time associated with the previous operation of the vehicle in which the on-vehicle controller determined one or more actions to take based on sensor data, map data and the like; col. 17 ll. 12-14: a list of pre-defined data sets limited based on a type of scenario associated with each of the pre-defined datasets);

providing the scenario data to a simulator configured to execute a simulation using the scenario data (e.g., Chu, col. 11 ll. 1-6: data associated with the scenario(s). A scenario may include a portion of time associated with the previous operation of the vehicle; col. 39 ll. 45-50: generating a simulation based on the scenarios and data associated with an operation of a vehicle by the second controller [i.e., the second controller is the controller that controlled the previous operation]; col. 4 ll. 9-10: the data may include data associated with the scenario(s));

and implement a second planner to generate second run data (e.g., Chu, col. 36 ll. 56-59: in one example, the first controller may include an update “(e.g., software update, modification, etc.)” to a second controller; col. 4 ll. 57-60: the simulation computing system may be configured to evaluate the performance of a planner component of the updated controller [second planner]; col. 37 ll. 60-65: the first controller may control the simulated vehicle through the scenarios associated with the simulation. The simulation computing system determines performance metrics associated with the performance of the first controller controlling the simulated vehicle in the simulation);

comparing the first run data and the second run data to determine a difference in at least one performance parameter (e.g., Chu, col. 39 ll. 62-63: the second autonomous controller “(e.g., control version of the controller)”; col. 14 ll. 64-65: an updated controller “(e.g., candidate controller)”; col. 32 l. 65 – col. 33 l. 8: evaluation component 1046 may compare metrics associated with the candidate controller controlling the simulated vehicle and metrics associated with the control version of the controller controlling the vehicle. Evaluation component 1046 may be configured to determine whether the differences meet or exceed a threshold);

and generating a performance indicator associated with the run, the performance indicator indicating a level of the determined difference between the at least one performance parameter in the first run data and the second run data (e.g., Chu, Fig. 7 and associated text, col. 6 ll. 7-8: metrics corresponding to attributes; col. 24 ll. 1-17: a first value 712(1) associated with a difference between a performance of a candidate controller and a control version of the controller with regard to a first performance attribute associated with a first scenario. Based on the determination that the first value 712(1) is equal to or greater than the threshold value, the system may cause the indicator 718(1) of a first color highlight; col. 24 ll. 41-46: based on the determination that a second value 712(2) is less than the first threshold and equal to or greater than the second threshold, the system may cause a second indicator 718(2) of a second color highlight);

wherein the method is carried out for a plurality of runs (e.g., Chu, col. 3 ll. 33-35: the simulation computing system may be configured to periodically evaluate updates to a controller [i.e., by the above method, which includes a run, and thus a second evaluation would result in a plurality of runs]);

wherein each of the respective visual representations is provided by a respective single tile (e.g., Chu, Fig. 7 and associated text, col. 24 ll. 15-17: the system may cause the indicator 718(1) of a first color highlight to be presented via the comparison page 702 [by a tile, see figure]; col. 24 ll. 44-46: the system may cause a second indicator 718(2) of a second color highlight to be presented [by a tile, see figure] via the comparison page);

in response to selection, via a graphical user interface, of a particular single tile corresponding to a particular run of the plurality of runs, opening a page associated with the particular run (e.g., Chu, Fig. 8 and associated text, col. 25 ll. 24-27: links 728 [single tiles, see figure] associated with a report corresponding to the candidate and/or control version, such as that illustrated in FIG. 8 [the single tile containing the link corresponds to a particular run at least because the data in Figure 7 corresponds to a particular candidate evaluation and because the report can display data corresponding to only the candidate version]).

Chu does not explicitly disclose generating a single respective performance indicator for each run of the plurality of runs; rendering, on a graphical user interface, visual representations of the respective performance indicators. However, in an analogous art, Kolman discloses:

generating a single respective performance indicator for each run of the plurality of runs (e.g., Kolman, Figs. 10, 11 and associated text, par. [0053]: the statistics selection mechanism 281 may list a number of statistics which may be selected for comparison across test runs [see figure, only one (“mean”) is selected, that mean is a performance indicator]);

rendering, on a graphical user interface, visual representations of the respective performance indicators (e.g., Kolman, Fig. 11 and associated text, par. [0053]: the Test Run Compare dialog 280 may also include presentation options 283 such as table, plot, bargraph, etc.; par. [0054]: suppose the mean measurement value [performance indicator] is selected using the statistics selection mechanism, and test runs A through H are selected using the test run selection mechanism. Suppose further that the bargraph presentation option is selected. FIG. 11 is an example when the selections shown in FIG. 10 are applied. As shown, FIG. 11 shows a bargraph 290 of the statistical mean capacitance value over test runs A through H).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the performance indicators for different test runs taught by Chu to include generating a single performance indicator for each run and rendering visual representations of the performance indicators on a graphical user interface, as taught by Kolman, as Kolman would provide the advantage of a means of viewing and visually comparing a single indicator of performance across different test runs. (See Kolman, par. [0051]).

As to claim 5, Chu/Kolman discloses the method according to claim 1 (see rejection of claim 1 above), but does not explicitly disclose wherein the method further comprises: assigning a unique run identifier to each run of the plurality of runs, the unique run identifier associated with a position in the visual representation of the performance indicators when rendered on the graphical user interface. However, in an analogous art, Kolman discloses: wherein the method further comprises: assigning a unique run identifier to each run of the plurality of runs, the unique run identifier associated with a position in the visual representation of the performance indicators when rendered on the graphical user interface (e.g., Kolman, Fig. 11 and associated text [see figure, each bar for a “mean” (performance indicator) is labeled with “Test Run A”, “Test Run B”, etc.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display of performance indicators in a visual representation and runs of Chu to include assigning a unique run identifier to each run of the plurality of runs, the unique run identifier associated with a position in the visual representation of the performance indicators when rendered on a graphical user interface, as taught by Kolman, as Kolman would provide the advantage of a means of indicating to the user which data corresponds to which test run. (See Kolman, Fig. 11).

As to claim 6, Chu/Kolman discloses the method according to claim 1 (see rejection of claim 1 above). Chu further discloses: comprising rendering on the graphical user interface, a plurality of examination cards, each of which comprises a plurality of tiles, where each tile provides a visual indication of a metric indicator for a respective different run, wherein for one of the examination cards, the tiles of that examination card provide the visual representation of the performance indicators (e.g., Chu, col. 3 ll. 33-35: the simulation computing system may be configured to periodically evaluate updates to a controller [i.e., by the above method, which includes a run, and thus a second evaluation would result in a plurality of runs]; Fig. 7 and associated text, col. 23 ll. 9-11: FIG. 7 is an example interface 700 illustrating a comparison page associated with a performance evaluation [see figure, the table is a card. There are multiple since there are multiple runs. Each cell of the table is a tile and includes performance indicators as noted above (i.e., the colors)]).

As to claim 7, Chu/Kolman discloses the method according to claim 1 (see rejection of claim 1 above). Chu further discloses: wherein the performance indicator of each level is associated with a visual indication, which is visually distinct from performance indicators of other levels (e.g., Chu, Fig. 7 and associated text, col. 24 ll. 13-17: based on the determination that the first value 712(1) is equal to or greater than the threshold value, the system may cause the indicator 718(1) of a first color highlight; col. 24 ll. 41-46: based on the determination that a second value 712(2) is less than the first threshold and equal to or greater than the second threshold, the system may cause a second indicator 718(2) of a second color highlight).

As to claim 8, Chu/Kolman discloses the method according to claim 7 (see rejection of claim 7 above). Chu further discloses: wherein the visually distinct visual indications comprise different colours (e.g., Chu, Fig. 7 and associated text, col. 24 ll. 15-17: the system may cause the indicator 718(1) of a first color highlight; col. 24 ll. 44-46: the system may cause a second indicator 718(2) of a second color highlight).

As to claim 12, Chu/Kolman discloses the method according to claim 1 (see rejection of claim 1 above). Chu further discloses comprising: supplying the scenario data to the simulator configured to execute a third planner to generate third run data, wherein the performance indicator is generated based on a comparison between the first run data and the third run data (e.g., Chu, col. 3 ll. 33-35: the simulation computing system may be configured to periodically evaluate updates to a controller [third planner, because the controller includes a planner and updates can include updates to it as noted above]; col. 2 ll. 43-48: the simulation computing system may validate an updated version of the controller by comparing a performance thereof [third run, because the performance is determined by running a simulation as noted above] to a performance [first run, again see above] of an on-vehicle version of the controller).
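As characterised in the Chu citations above, the two-colour highlighting reduces to bucketing a difference value against a first and a second threshold. A minimal sketch, with the function name and colour labels invented for illustration (Chu's actual implementation is not disclosed in this action beyond the cited passages):

```python
def indicator_color(value: float,
                    first_threshold: float,
                    second_threshold: float) -> str:
    """Bucket a performance difference into a colour highlight:
    at or above the first threshold -> first colour; between the
    second and first thresholds -> second colour; else no highlight."""
    if value >= first_threshold:
        return "first-color"
    if value >= second_threshold:
        return "second-color"
    return "none"
```

This mirrors the cited logic at col. 24: indicator 718(1) fires at or above the first threshold, and 718(2) fires between the second and first thresholds.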
As to claim 13, Chu/Kolman discloses the method according to claim 1 (see rejection of claim 1 above). Chu further discloses: wherein the second planner comprises a modified version of the first planner, wherein the modified version of the first planner comprises a modification affecting one or more of its perception ability, prediction ability and computer execution resource (e.g., Chu, col. 36 ll. 56-59: in one example, the first controller may include an update “(e.g., software update, modification, etc.)” to a second controller; col. 8 ll. 20-22: the autonomous controller 102 may include a perception component, a prediction component, a planner component and a tracker component [so the controller is a planner here because it includes a planning component]; col. 2 ll. 14-15: an update to a component or sub-component of a controller [i.e., any component, including the perception or prediction component]).

As to claim 14, Chu/Kolman discloses the method according to claim 1 (see rejection of claim 1 above). Chu further discloses: wherein the comparing the first run data and the second run data to determine a difference in at least one performance parameter comprises using juncture point recognition to determine if there is a juncture in performance (e.g., Chu, col. 22 ll. 39-44: the simulation computing system may validate the component based on a determination that no differences exist “(e.g., each of the first differences 628 have a value of zero and each of the second differences 630 have a value of zero)”).

As to claim 15, Chu/Kolman discloses the method according to claim 1 (see rejection of claim 1 above). Chu further discloses: wherein the run data comprises one or more of: sensor data (e.g., Chu, col. 12 ll. 7-9: metrics associated with points of a corridor corresponding to the vehicle path, velocities, accelerations; col. 34 ll. 34-39: sensor system(s) may include sensors to measure acceleration of the drive module); perception outputs captured/generated onboard one or more vehicles; and data captured from external sensors.

As to claim 16, it is an apparatus claim having limitations substantially the same as claim 1. Accordingly, it is rejected for substantially the same reasons. Further limitations, disclosed by Chu, include: an apparatus comprising a processor; and a code memory configured to store computer readable instructions for execution by the processor (e.g., Chu, col. 35 ll. 41-45: processor(s) 1016 and 1052 may be any suitable processor capable of executing instructions to perform operations described herein; col. 35 l. 65 – col. 36 l. 1: memory 1018 and 1032 may store instructions to implement the methods described herein) to: (see rejection of claim 1 above).

As to claim 17, Chu/Kolman discloses the apparatus according to claim 16 (see rejection of claim 16 above). Chu further discloses: comprising a graphical user interface (e.g., Chu, Fig. 7 and associated text).

As to claim 18, it is an apparatus claim having limitations substantially the same as claim 13. Accordingly, it is rejected for substantially the same reasons.

As to claim 19, it is a computer program claim having limitations substantially the same as claim 1. Accordingly, it is rejected for substantially the same reasons. Further limitations, disclosed by Chu, include: a computer program comprising a set of computer readable instructions, which when executed by a processor cause the processor (e.g., Chu, col. 35 ll. 41-45: processor(s) 1016 and 1052 may be any suitable processor capable of executing instructions to perform operations described herein; col. 35 l. 65 – col. 36 l. 1: memory 1018 and 1032 may store instructions to implement the methods described herein) to: (see rejection of claim 1 above).

As to claim 20, it is an apparatus claim having limitations substantially the same as claim 7.
Accordingly, it is rejected for substantially the same reasons.

Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Chu (US 11,921,504) in view of Kolman (US 2008/0155354) and further in view of Rye et al. (US 2009/0125825) (art of record – hereinafter Rye).

As to claim 9, Chu/Kolman discloses the method according to claim 7 (see rejection of claim 7 above), but does not explicitly disclose comprising rendering on a graphical user interface a key, which identifies the levels and their corresponding visual indications. However, in an analogous art, Rye discloses comprising rendering on a graphical user interface a key, which identifies the levels and their corresponding visual indications (e.g., Rye, par. [0033]: the metrics include deviation from historical performance; Fig. 2 and associated text, par. [0034]: for the selected metric, the legend 204 [key] identifies how different colors or other indicators correspond to different values of the selected metric. Legend 204 could indicate that green colors are used at the lower end of the legend “(-1 standard deviation)”, white colors are used in the central area of the legend, and red colors are used at the upper end of the legend “(+1 standard deviation)”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display of visual indications indicating levels of Chu to include rendering on a graphical user interface a key, which identifies the levels and their corresponding visual indications, as taught by Rye, as Rye would provide the advantage of a means of defining the meanings of the visual indications to a user viewing the interface. (See Rye, par. [0034], Fig. 2).

As to claim 10, Chu/Kolman/Rye discloses the method according to claim 9 (see rejection of claim 9 above). Chu further discloses: comprising rendering on the graphical user interface, a visual representation of the performance indicators (e.g., Chu, Fig. 7 and associated text, col. 24 ll. 15-17: the system may cause the indicator 718(1) of a first color highlight to be presented via the comparison page 702; col. 24 ll. 44-46: the system may cause a second indicator 718(2) of a second color highlight to be presented via the comparison page).

As to claim 11, Chu/Kolman/Rye discloses the method according to claim 10 (see rejection of claim 10 above), but does not explicitly disclose wherein the method further comprises: assigning a unique run identifier to each run of the plurality of runs, the unique run identifier associated with a position in the visual representation of the performance indicators when rendered on a graphical user interface. However, in an analogous art, Kolman discloses: wherein the method further comprises: assigning a unique run identifier to each run of the plurality of runs, the unique run identifier associated with a position in the visual representation of the performance indicators when rendered on a graphical user interface (e.g., Kolman, Fig. 11 and associated text [see figure, each bar for a “mean” (performance indicator) is labeled with “Test Run A”, “Test Run B”, etc.]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the display of performance indicators in a visual representation and runs of Chu to include assigning a unique run identifier to each run of the plurality of runs, the unique run identifier associated with a position in the visual representation of the performance indicators when rendered on a graphical user interface, as taught by Kolman, as Kolman would provide the advantage of a means of indicating to the user which data corresponds to which test run. (See Kolman, Fig. 11).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TODD AGUILERA, whose telephone number is (571) 270-5186. The examiner can normally be reached M-F 11 AM - 7:30 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hyung S Sough, can be reached at (571) 272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TODD AGUILERA/
Primary Examiner, Art Unit 2192

Prosecution Timeline

Nov 27, 2023
Application Filed
Aug 09, 2025
Non-Final Rejection — §101, §103, §112
Nov 12, 2025
Response Filed
Mar 09, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596638
SYSTEMS AND METHODS FOR SELECTING TEST COMBINATIONS OF HARDWARE AND SOFTWARE FEATURES FOR FEATURE VALIDATION
2y 5m to grant Granted Apr 07, 2026
Patent 12554623
AUTOMATIC METAMORPHIC TESTING
2y 5m to grant Granted Feb 17, 2026
Patent 12554627
TESTING FRAMEWORK WITH DYNAMIC APPLICABILITY MANAGEMENT
2y 5m to grant Granted Feb 17, 2026
Patent 12547532
CONFIGURATION-BASED SYSTEM AND METHOD FOR HANDLING TRANSIENT DATA IN COMPLEX SYSTEMS
2y 5m to grant Granted Feb 10, 2026
Patent 12541352
CONTROLLING INSTALLATION OF DRIVERS BASED ON HARDWARE AND SOFTWARE COMPONENTS PRESENT ON INFORMATION TECHNOLOGY ASSETS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+57.1%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 493 resolved cases by this examiner. Grant probability derived from career allow rate.
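The headline figures above can be reproduced from the counts shown in the report: the career allow rate is simply granted cases divided by resolved cases. The sketch below checks the 57% grant probability against the 282/493 figure. The second part is a hypothetical reading of the "+57.1% interview lift" (treating it as a relative increase over the no-interview grant rate); the page does not state how the lift is actually computed, so that arithmetic is an assumption for illustration only.

```python
# Career allow rate from the examiner's resolved cases (counts taken from the report).
granted, resolved = 282, 493
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 57.2%, shown on the page as 57%

# Hypothetical methodology: if "+57.1% interview lift" is a relative increase
# over the no-interview grant rate, then a 99% with-interview rate implies a
# no-interview base of 0.99 / 1.571. This is an assumption, not stated on the page.
with_interview = 0.99
lift = 0.571
implied_base = with_interview / (1 + lift)
print(f"Implied no-interview grant rate: {implied_base:.0%}")  # ~63%
```

If the lift were instead an absolute difference in percentage points, the implied no-interview rate would be 99% - 57.1% ≈ 42%; the report does not say which convention it uses.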
