Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,358

GENERATION OF AUTOMATION TEST SCRIPT USING GENERATIVE ARTIFICIAL INTELLIGENCE

Non-Final OA: §103, §112
Filed
Sep 06, 2023
Examiner
AGUILERA, TODD
Art Unit
2192
Tech Center
2100 — Computer Architecture & Software
Assignee
The Toronto-Dominion Bank
OA Round
3 (Non-Final)
57%
Grant Probability
Moderate
3-4
OA Rounds
3y 8m
To Grant
99%
With Interview

Examiner Intelligence

Grants 57% of resolved cases
57%
Career Allow Rate
282 granted / 493 resolved
+2.2% vs TC avg
Strong +57% interview lift
Without
With
+57.1%
Interview Lift
resolved cases with interview
Typical timeline
3y 8m
Avg Prosecution
37 currently pending
Career history
530
Total Applications
across all art units

Statute-Specific Performance

§101
16.6%
-23.4% vs TC avg
§103
39.7%
-0.3% vs TC avg
§102
9.4%
-30.6% vs TC avg
§112
29.4%
-10.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 493 resolved cases

Office Action

§103 §112
DETAILED ACTION

Remarks

Applicant presents a request for continued examination dated 31 October 2025 responsive to the 6 August 2025 final Office action (the "Final Office Action") as well as the 16 October 2025 advisory action. With the request: claims 1, 4, 9, 12, 17 and 20 are amended; claims 3, 11 and 19 are cancelled; and new claims 21-23 [1] are added. Claims 1-2, 4-10, 12-18 and 20-23 remain pending. Claims 1, 9 and 17 are the independent claims. Any unpersuasive arguments are addressed in the "Response to Arguments" section below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 31 October 2025 has been entered.

37 C.F.R. § 1.126

The numbering of claims is not in accordance with 37 CFR 1.126, which requires the original numbering of the claims to be preserved throughout the prosecution. When claims are canceled, the remaining claims must not be renumbered. When new claims are presented, they must be numbered consecutively beginning with the number next following the highest numbered claim previously presented (whether entered or not). The second claim 22 has been renumbered as "23".

37 C.F.R. § 1.121

Applicant's claim listing is also not in accordance with 37 CFR 1.121, which requires that added claim language be shown via underlining and deleted claim language be shown via strike-through or double bracketing.
In particular: the words "a repository of" are shown as deleted from claims 1, 9 and 17 even though this language was already deleted from the claims with the amendments filed 14 July 2025, and the words "for automating" are newly added to line 10 of claim 1 in place of the words "that automates" without any underlining or strike-through. The claims are nonetheless examined in the interest of compact prosecution.

Examiner Notes

Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

Applicant argues with respect to claim 1 that paragraphs [0060-0061] of Deakin are "simply a rephrasing of the same scenarios as step sentences for testing" instead of teaching an artifact "called" a description of a plurality of features or creating a feature description from the scenario or data descriptions, and therefore "do not establish a feature-description layer based on testing-element descriptions." (Remarks, p. 10 par. 2).
Examiner respectfully disagrees and submits that the Given/When/Then test scenarios are feature descriptions because they describe characteristics or attributes of the testing, such as test steps. The fact that they are also a rephrasing of some other scenario format does not alter this conclusion. Nor does the fact that Deakin does not explicitly use the phrase "description of a plurality of features."

Applicant argues that in Deakin a feature is "expressly implementation code, not a feature description within the test asset." (Remarks, p. 10 par. 3). To the extent Applicant is arguing that Deakin's use of the term "feature" to describe code implies that other aspects of Deakin are not features, examiner respectfully disagrees. The instant claims and specification do not define the term, and there is no requirement that the prior art use the same terminology in the same manner as the instant application in order to teach what is claimed. Examiner respectfully submits that the Given-When-Then formatted test scenarios are properly interpreted as a description of a plurality of features for the reasons set forth above.

Applicant argues that airline names are test data or expected results, and that treating this information as a description of a plurality of features "conflates test data/expected results with the claimed 'description of a plurality of features'". (Remarks, p. 11 par. 2). Examiner respectfully disagrees and submits that test data and expected results are features because they are characteristics or attributes of tests. The same goes for test steps. Applicant appears to have some other definition of "a plurality of features" in mind but has not shared that definition with the examiner. To the extent that definition is inconsistent with the above interpretation, examiner respectfully submits that it is not required by the claims or specification.

Applicant argues that "DEAKIN's code generation is scenario-based, not feature based". (Remarks, p. 11 par. 3).
Examiner respectfully submits that references to scenarios do not imply that anything in Deakin is not a feature or "feature-based". For example, the specification discloses that the approach of the disclosed embodiments "facilitate test scenario design." (Specification at par. [0097]). The disclosed invention is accordingly "scenario-based" as well. These arguments are thus unpersuasive.

Applicant's arguments with respect to the remaining claims, by virtue of their dependence from claim 1, similarity with claim 1, or dependence from a similar claim, are unpersuasive for the same reasons.

Claim Rejections - 35 USC § 112

The Final Office Action's § 112 rejections are withdrawn in view of Applicant's claim amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 7-10 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) (art of record - hereinafter Deakin) and Venkataraman et al. (US 2019/0196949) (art of record - hereinafter Venkataraman).

As to claim 1, Deakin discloses an apparatus comprising: a processor that executes instructions stored in a memory to configure the processor (e.g., Deakin, pars. [0114], [0124]) to: create a software test for testing a software program using a description of a plurality of testing elements (e.g., Deakin, par.
[0040]: the AI could be instructed to "Create some test scenarios, both positive and negative with this data for a Java function that takes an ICAO and/or IATA code and returns an airline name, and outputs a table of the inputs and expected outputs for such a function" [all of this being a description of a plurality of testing elements]; par. [0041]: the AI may return scenarios and the tabulated test data that can be used in automated testing; par. [0010]: testing the generated implementation source code [software program] against the generated test scenarios); store the software test within the memory (see above, the test scenarios [software tests] are necessarily within memory); generate a description of a plurality of features to be included within the software test based on the description of the plurality of testing elements (e.g., Deakin, par. [0059]: the AI may then be instructed to restate these scenarios as GIVEN...WHEN...THEN behavioral scenarios, and the AI may then output the following in response, par. [0061]: GIVEN that the IATA/ICAO code is "BAB", WHEN the function 'getAirlineName' is invoked with this code THEN it should return "British Airways"; par. [0067]: the result at this point is that scenario definitions and input with expected outputs are generated, which completes step 1 (302) of FIG. 3); generate an automation script for automating execution of the software test based on a generative artificial intelligence (GenAI) model being executed using the plurality of features (e.g., Deakin, claim 10: the generative AI module comprises a machine learning model; par. [0068]: in step 304 of FIG. 3, the generative AI may be used to write test code.
Continuing the example use case, the AI may be instructed using the following input: "In Java code using the Junit framework, write tests for the above scenarios [i.e., the description of a plurality of features, see above] that provide the inputs to the function and verify the return string is what is expected." In response, the AI may return source code implementation of the test cases it previously described); and, in response to a request to execute the software test, execute the plurality of testing elements based on the automation script (e.g., Deakin, par. [0068]: the AI may be instructed to implement test cases using the following input [request] "In Java code and using the Junit framework, write unit tests for the above scenarios that provide inputs to the function and verify the return string is what is expected" [note that the unit tests must be executed to perform the verification]; par. [0127]: in block 514, the method 500 includes verifying the generated implementation source code [i.e., executing the aforementioned unit tests]).

Deakin does not explicitly disclose to: attach the automation script to the software test in memory. However, in an analogous art, Venkataraman discloses: to attach the automation script to the software test in the memory (e.g., Venkataraman, par. [0040]: testing scenarios "(e.g., test cases)" [necessarily in memory]; par. [0008]: assigning the generated automated testing script to the test case). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the test execution of Deakin to include attaching the automation script to the software test in the memory, as taught by Venkataraman, as Venkataraman would provide the advantage of a means of determining which test script to execute when a test case is selected for execution. (See Venkataraman, pars. [0081], [0030]).
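For orientation, the scenario-to-test mapping the rejection reads out of Deakin (a GIVEN/WHEN/THEN scenario for a `getAirlineName` function restated as JUnit-style test code) can be sketched roughly as follows. This is an illustrative sketch only: the lookup-table implementation and the "Unknown" fallback are assumptions for the sake of a runnable example, not Deakin's actual generated code, and plain checks stand in for the JUnit assertions Deakin describes.

```java
import java.util.Map;

// Illustrative stand-in for the generated implementation under test and the
// kind of test the AI is asked to write for the scenario
// "GIVEN code 'BAB' ... THEN return 'British Airways'".
public class AirlineNameTestSketch {
    // Hypothetical implementation under test (not from the reference).
    static String getAirlineName(String code) {
        return Map.of("BAB", "British Airways").getOrDefault(code, "Unknown");
    }

    public static void main(String[] args) {
        // Positive scenario: a known code returns the expected airline name.
        if (!getAirlineName("BAB").equals("British Airways"))
            throw new AssertionError("positive scenario failed");
        // Negative scenario: an unrecognized code yields the fallback value.
        if (!getAirlineName("ZZZ").equals("Unknown"))
            throw new AssertionError("negative scenario failed");
    }
}
```

The point of the mapping is that each scenario contributes one input/expected-output pair, which is exactly the "tabulated test data" the cited paragraphs describe.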
As to claim 2, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above), but Deakin does not explicitly disclose wherein the processor is further configured to: display results of the execution of the software test via a user interface. However, in an analogous art, Venkataraman discloses: wherein the processor is further configured to: display results of the execution of the software test via a user interface (e.g., Venkataraman, par. [0080]: engine 160 may persist results from the execution of the automated testing scripts in a reporting database; the reporting engine may generate reports from the information stored in the reporting database, which can be reviewed by users 112; these reports include details on the performance of the system during the execution of the automated testing scripts and may include any warning messages displayed). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the test execution of Deakin such that results of execution of the software test are displayed via a user interface, as taught by Venkataraman, as Venkataraman would provide the advantages of a means for a user to review the results of the test and a means of informing the user of any test execution warnings. (See Venkataraman, par. [0080]).

As to claim 7, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above), but Deakin does not explicitly disclose wherein the processor is configured to: generate an executable software program that executes the plurality of testing elements in sequence via a user interface of a software application. However, in an analogous art, Venkataraman discloses: wherein the processor is configured to: generate an executable software program that executes the plurality of testing elements in sequence via a user interface of a software application (e.g., Venkataraman, par.
[0076]: module 710 selects test cases to be executed; the test case sequencer receives the selected test cases; the sequence of test cases is passed to the test execution engine; par. [0003]: executing the test cases through, for example, a test script; par. [0055]: as described above, engine 122 converts the intent into executable automated scripts; par. [0065]: the intent or action of the scenario "(e.g., click on the movie store link)"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the test script generation of Deakin to include generating an executable software program that executes the plurality of testing elements in sequence via a user interface of a software application, as taught by Venkataraman, as Venkataraman would provide the advantages of a means of executing tests based on priority and a means of maximizing certain testing thresholds and criteria. (See Venkataraman, par. [0035]).

As to claim 8, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above), and Deakin further discloses: wherein the request comprises an identifier of a programming language, and wherein the processor is configured to execute the software test via a framework developed in the programming language (e.g., Deakin, par. [0068]: the AI may be instructed to implement test cases using the following input [request] "In Java code and using the Junit framework, write unit tests for the above scenarios that provide inputs to the function and verify the return string is what is expected" [note that the unit tests must be executed to perform the verification]; par. [0127]: in block 514, the method 500 includes verifying the generated implementation source code [i.e., executing the aforementioned unit tests]).

As to claim 9, it is a method claim whose limitations are substantially the same as those of claim 1. Accordingly, it is rejected for substantially the same reasons.
As to claim 10, it is a method claim whose limitations are substantially the same as those of claim 2. Accordingly, it is rejected for substantially the same reasons. As to claim 15, it is a method claim whose limitations are substantially the same as those of claim 7. Accordingly, it is rejected for substantially the same reasons. As to claim 16, it is a method claim whose limitations are substantially the same as those of claim 8. Accordingly, it is rejected for substantially the same reasons. As to claim 17, it is a medium claim whose limitations are substantially the same as those of claim 1. Accordingly, it is rejected for substantially the same reasons. As to claim 18, it is a medium claim whose limitations are substantially the same as those of claim 2. Accordingly, it is rejected for substantially the same reasons.

Claims 4-5, 12-13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Venkataraman (US 2019/0196949) and further in view of Verma (US 2018/0024912) (art of record - hereinafter Verma).

As to claim 4, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above), but does not explicitly disclose wherein the processor is further configured to generate a step definition based on the description of the plurality of features, wherein the step definition is generated in a predefined programming language. However, in an analogous art, Verma discloses: wherein the processor is further configured to generate a step definition based on the description of the plurality of features, wherein the step definition is generated in a predefined programming language (e.g., Verma, pars. [0026-0032]: the Feature File Generation Engine 60 may use the information provided by the user to generate a feature file, as below: Given no user exists with an email of email@person.com; When I go to sign in page; Then I should see "Bad email or password"; claim 1: generating a step definition of the features file; par.
[0033]: to generate a step definition in a programming language, as shown below [note that the step definition includes features of the feature file]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generation of a test based on a description taught by Deakin/Venkataraman to include generating a step definition based on the description, wherein the step definition is generated in a predefined programming language, as taught by Verma, as Verma would provide the advantage of a means of generating tests in a particular framework. (See Verma, pars. [0033-0034]).

As to claim 5, Deakin/Venkataraman/Verma discloses the apparatus of claim 4 (see rejection of claim 4 above), but Deakin does not explicitly disclose wherein, when the processor generates the automation script, the processor is further configured to: generate the automation script in the predefined programming language based on the step definition. However, in an analogous art, Venkataraman discloses wherein, when the processor generates the automation script, the processor is further configured to: generate the automation script in the predefined programming language based on the step definition (e.g., Venkataraman, par. [0059]: step definitions act as skeleton placeholders where automation code blocks may be implemented; the automation code block may be written in a variety of programming languages, such as Ruby, Python and so forth; once generated, the step definitions and respective code blocks may be referred to as an automated testing script; par. [0060]: the automated testing scripts may be implemented automatically).
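For context, the feature-file/step-definition pattern at issue in claims 4-5 (Verma's Given/When/Then feature text, with step definitions acting as placeholders bound to automation code blocks per Venkataraman) can be sketched minimally as below. The binding of step text to `Runnable` code blocks via a map, and the logged action names, are illustrative assumptions for a self-contained example, not either reference's actual design; the feature text itself is Verma's quoted example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: each Given/When/Then step from the feature file is
// bound to a code block (the "step definition"); executing the bound blocks
// in order plays the role of the automated testing script.
public class StepDefinitionSketch {
    public static String runFeature() {
        Map<String, Runnable> steps = new LinkedHashMap<>();
        StringBuilder log = new StringBuilder();
        steps.put("Given no user exists with an email of email@person.com",
                  () -> log.append("reset-users;"));
        steps.put("When I go to sign in page",
                  () -> log.append("open-signin;"));
        steps.put("Then I should see \"Bad email or password\"",
                  () -> log.append("assert-error;"));
        steps.values().forEach(Runnable::run); // execute steps in order
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runFeature());
    }
}
```

In a real framework each step definition would contain automation code (in Ruby, Python, Java, etc.) rather than a log entry; the sketch only shows the placeholder-to-code-block structure the cited paragraphs describe.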
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the step definition generation of Deakin/Verma to include generating the script in the predefined language based on a step definition, as taught by Venkataraman, as Venkataraman would provide the advantage of a means of providing an executable implementation of the step definitions. (See Venkataraman, par. [0059]).

As to claim 12, it is a method claim whose limitations are substantially the same as those of claim 4. Accordingly, it is rejected for substantially the same reasons. As to claim 13, it is a method claim whose limitations are substantially the same as those of claim 5. Accordingly, it is rejected for substantially the same reasons. As to claim 20, it is a method claim whose limitations are substantially the same as those of claim 4. Accordingly, it is rejected for substantially the same reasons.

Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Venkataraman (US 2019/0196949) and further in view of Kohisseri et al. (US 2022/0188079) (art of record - hereinafter Kohisseri).

As to claim 6, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above), but does not explicitly disclose wherein the processor is further configured to train the GenAI model to understand connections between features and source code based on execution of the GenAI model using mappings between a plurality of features and a plurality of code modules, respectively. However, in an analogous art, Kohisseri discloses: wherein the processor is further configured to train the GenAI model to understand connections between features and source code based on execution of the GenAI model using mappings between a plurality of features and a plurality of code modules, respectively (e.g., Kohisseri, par.
[0029]: neural network models [GenAI models] are trained to comprehend the user inputs 103 and generate the codes required [i.e., to understand the required code for the inputs]; par. [0060]: consider that the auto encode model has been trained using the following inputs: [see table; the words in the left column are features, and the codes in the right column include code modules]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the code generation and model of Deakin/Venkataraman to include training the GenAI model to understand connections between features and source code based on execution of the GenAI model on mappings between a plurality of features and a plurality of code modules, respectively, as taught by Kohisseri, as Kohisseri would provide the advantage of a means to configure the generative model to generate the code required for particular features (see Kohisseri, par. [0029]) as suggested by Deakin. (See Deakin, claim 10).

As to claim 14, it is a method claim whose limitations are substantially the same as those of claim 6. Accordingly, it is rejected for substantially the same reasons.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Venkataraman (US 2019/0196949) and further in view of Tahvili et al. (US 2024/0241817) (art made of record - hereinafter Tahvili).

As to claim 21, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above) but does not explicitly disclose wherein the processor is configured to: identify the plurality of features from a historical software test having similar requirements to the software test. However, in an analogous art, Tahvili discloses wherein the processor is configured to: identify the plurality of features from a historical software test having similar requirements to the software test (e.g., Tahvili, par.
[0034]: a system/method described herein may provide advantages such as reducing manual work associated with software testing by automatically recommending test case specifications with high accuracy; par. [0068]: the method recommends a corresponding test specification for each requirement, based on previous test cases developed for similar requirements; abstract: features from the test specifications [i.e., identifying the test specification is identifying a plurality of features]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the features of Deakin to include features identified from a historical software test having similar requirements to the software test, as taught by Tahvili, as Tahvili would provide the advantage of a means to adapt previous knowledge for testing a new product. (See Tahvili, par. [0012]).

Claim 22 [2] is rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Venkataraman (US 2019/0196949) and further in view of Talukdar et al. (US 2020/0104241) (art made of record - hereinafter Talukdar).

As to claim 22, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above) but does not explicitly disclose wherein the processor is configured to: display the plurality of features on a display of a device; and receive, via the display, an input from a user to one or more of provide feedback regarding the plurality of features or accept the plurality of features. However, in an analogous art, Talukdar discloses: wherein the processor (e.g., Talukdar, par. [0010]) is configured to: display the plurality of features on a display of a device (e.g., Talukdar, par. [0045]: the natural language file 130 may be displayed to the user 102 in S426; par.
[0044]: the natural language file 130 may include each of the steps in the feature file) and receive, via the display, an input from a user to one or more of provide feedback regarding the plurality of features or accept the plurality of features (e.g., Talukdar, par. [0045]: the natural language file 130 may be displayed to the user 102 in S426; the natural language file 130 may allow the user to review the test script to confirm it is written as intended; the user 102 may save the natural language file 130 via selection of a save control [receive input to accept the plurality of features]; par. [0046]: after the user reviews the natural language file in S426, the user 102 may execute the natural language file; the user may select the natural language file 130 for execution [receive input to accept the plurality of features]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the features of Deakin such that they are displayed to the user, and such that input from a user to one or more of provide feedback regarding the plurality of features or accept the plurality of features is received via the display, as taught by Talukdar, as Talukdar would provide the advantage of a means to confirm the features are as intended and to avoid utilizing features that are not. (See Talukdar, par. [0045]).

Claim 23 [3] is rejected under 35 U.S.C. 103 as being unpatentable over Deakin (US 2024/0411673) in view of Venkataraman (US 2019/0196949) and further in view of Hicks et al. (US 2024/0330169) (art made of record - hereinafter Hicks).

As to claim 23, Deakin/Venkataraman discloses the apparatus of claim 1 (see rejection of claim 1 above) and further discloses the GenAI model (see rejection of claim 1 above), but does not explicitly disclose wherein the software comprises a label, generated by the GenAI model, identifying a purpose of the software test.
However, in an analogous art, Hicks discloses: wherein the software comprises a label, generated by the model, identifying a purpose of the software test (e.g., Hicks, par. [0041]: an automated test tool utilizes a machine learning model to automatically generate tags for test cases; the automatically generated tags may relate to functions tested by the test [testing those functions being a purpose of the test]; a test case that is directed to I/O functions will be tagged with the 'I/O' tag, and a test case that is directed to networking functions will be tagged with the 'networking' tag). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the GenAI model generating tests taught by Deakin such that it labels the tests with a label identifying a purpose of the test, as taught by Hicks, as Hicks would provide the advantage of a means of easily identifying relevant tests. (See Hicks, par. [0035]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TODD AGUILERA, whose telephone number is (571) 270-5186. The examiner can normally be reached M-F 11AM - 7:30PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hyung S. Sough, can be reached at (571) 272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TODD AGUILERA/
Primary Examiner, Art Unit 2192

[1] Applicant's claim listing erroneously includes two claims numbered as "22".
[2] See the "37 C.F.R. § 1.126" section above.
[3] See the "37 C.F.R. § 1.126" section above.

Prosecution Timeline

Sep 06, 2023
Application Filed
May 31, 2025
Non-Final Rejection — §103, §112
Jul 14, 2025
Response Filed
Aug 02, 2025
Final Rejection — §103, §112
Oct 06, 2025
Response after Non-Final Action
Oct 31, 2025
Request for Continued Examination
Nov 06, 2025
Response after Non-Final Action
Jan 24, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596638
SYSTEMS AND METHODS FOR SELECTING TEST COMBINATIONS OF HARDWARE AND SOFTWARE FEATURES FOR FEATURE VALIDATION
2y 5m to grant Granted Apr 07, 2026
Patent 12554623
AUTOMATIC METAMORPHIC TESTING
2y 5m to grant Granted Feb 17, 2026
Patent 12554627
TESTING FRAMEWORK WITH DYNAMIC APPLICABILITY MANAGEMENT
2y 5m to grant Granted Feb 17, 2026
Patent 12547532
CONFIGURATION-BASED SYSTEM AND METHOD FOR HANDLING TRANSIENT DATA IN COMPLEX SYSTEMS
2y 5m to grant Granted Feb 10, 2026
Patent 12541352
CONTROLLING INSTALLATION OF DRIVERS BASED ON HARDWARE AND SOFTWARE COMPONENTS PRESENT ON INFORMATION TECHNOLOGY ASSETS
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+57.1%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 493 resolved cases by this examiner. Grant probability derived from career allow rate.
