Prosecution Insights
Last updated: April 19, 2026
Application No. 18/097,070

INCREMENTAL MACHINE LEARNING TRAINING

Status: Final Rejection — §102, §103
Filed: Jan 13, 2023
Examiner: CHEN, KUANG FU
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Home Depot Product Authority LLC
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 203 granted / 252 resolved; +25.6% vs TC avg)
Interview Lift: +67.0% (strong; based on resolved cases with interview)
Typical Timeline: 2y 11m average prosecution (37 applications currently pending)
Career History: 289 total applications (across all art units)
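
As a quick arithmetic check of the figures above, a short Python snippet. This is illustrative only, and it assumes the "+25.6% vs TC avg" figure is an absolute percentage-point difference, which the page does not state explicitly:

```python
granted, resolved = 203, 252
allow_rate = granted / resolved       # 0.8056 -> the displayed 81%
tc_avg = allow_rate - 0.256           # implied Tech Center average, under the assumption above
print(f"allow rate {allow_rate:.1%}, implied TC avg {tc_avg:.1%}")
# -> allow rate 80.6%, implied TC avg 55.0%
```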

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Based on career data from 252 resolved cases; "vs TC avg" compares against a Tech Center average estimate.
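
Read the same way (deltas as absolute percentage points, again an assumption), all four statute lines imply roughly the same Tech Center baseline, which suggests that reading is internally consistent:

```python
rates = {"§101": (0.184, -0.216), "§103": (0.474, 0.074),
         "§102": (0.115, -0.285), "§112": (0.140, -0.260)}
for statute, (rate, delta) in rates.items():
    # Subtracting the delta from the examiner's rate recovers the baseline.
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {rate - delta:.1%}")
# Every line prints an implied TC average of 40.0%.
```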

Office Action

Grounds: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendments filed 1/23/2026 have been entered. Claims 1, 3, 10, 11, and 12 have been amended. Claims 1-20 are pending in the application.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5-7, 10, 12, and 14-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. (hereinafter Zhang), US 2020/0175362 A1. Zhang was disclosed in an IDS dated 6/13/2023.

Regarding independent claim 1, Zhang discloses a computing system for training a machine learning model ([0026] computing system 100 in which lifelong learning techniques are implemented), the system comprising: a processor ([0031]-[0033] computing system comprises server 104, represented by electronic device 200 with processor 210); and a memory storing instructions that, when executed by the processor, cause the computing system to perform operations comprising ([0031]-[0033] memory 230 storing instructions that, when executed by the processor 210, cause the computing system to implement): receiving a first version of a machine learning model ([0052], [0055] with a given small network, the system 500 for multi-task based lifelong learning implemented by electronic device 200 learns an initial model on a first given task; [0074] system 500 obtains the previously trained model as the initial/existing model (receiving a first version of a machine learning model)); conducting first training on the machine learning model first version based on a first training set ([0055] with a given small network, the system 500 learns an initial model (the machine learning model first version) on a first given task (conducting first training); [0058] using previous training datasets D_1, ..., D_{t-1}); adding a layer to the machine learning model first version after conducting the first training to create a machine learning model second version ([0077] the system incorporates inherent correlations between the existing task and the new task and identifies the added layer as a task-specific layer for the new task; [0078] the expanded network architecture may include adding a layer to the network architecture and expanding one or more existing layers of the network architecture); conducting second training on the machine learning model second version based on a second training data ([0068]-[0069] train the model for the child network architectures; [0077] the system trains the ML model to perform the new task (conducting second training on the machine learning model second version) using training data for the new task (based on a second training data) without access to the training data for the old task); and deploying the machine learning model after the second training ([0070] the addition of each new task may require the deployment of a new deep learning model).
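For orientation only, a minimal PyTorch sketch of the pipeline recited in claim 1 as mapped above: receive a first model version, train it on a first training set, add a layer to create a second version, train on second training data, then deploy. This is not the applicant's or Zhang's implementation; fake_loader, the layer sizes, and the training loop are all hypothetical.

```python
import torch
import torch.nn as nn

def fake_loader(n=32, dim=64, classes=10):
    # Hypothetical stand-in for a real training set.
    return [(torch.randn(n, dim), torch.randint(0, classes, (n,)))]

def train(model, loader, epochs=1):
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# First version, then first training on a first training set.
model_v1 = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
train(model_v1, fake_loader())

# Add a layer after the first training to create a second version
# (the new layer's weights are randomly initialized; the old ones carry over).
model_v2 = nn.Sequential(*model_v1[:-1],
                         nn.Linear(32, 32), nn.ReLU(),  # the added layer
                         model_v1[-1])
train(model_v2, fake_loader())  # second training, on second training data
# ...the model would then be deployed after the second training.
```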
Regarding dependent claim 5, Zhang further discloses the computing system of claim 1, wherein the layer comprises a fully-connected layer (FIG. 4, FIG. 5, [0049]-[0050] the bolded network nodes and lines represent the old or existing network architecture originally in place to perform the old task 405, while the non-bolded nodes and lines represent the expansions to the old or existing network architecture in order to perform the new task 410).

Regarding dependent claim 6, Zhang further discloses the computing system of claim 1, wherein the layer is a first layer, wherein the operations further comprise: adding a second layer to the machine learning model second version to create a machine learning model third version ([0068] maximum expanding layers are set as 2 and 3; [0078] the expanded network architecture may include adding a layer to the network architecture and expanding one or more existing layers of the network architecture); and conducting third training on the machine learning model third version ([0077] the system may train the ML model to perform the new task using training data for the new task without access to the training data for the old task); wherein the deploying comprises deploying the machine learning model after the third training ([0070] the addition of each new task may require the deployment of a new deep learning model).

Regarding dependent claim 7, Zhang further discloses the computing system of claim 6, wherein the first training comprises first training data and the second training comprises second training data and the third training comprises third training data, wherein the first training data, the second training data, and the third training data are different from one another ([0058] Given a sequence of T tasks, the task at time point t = 1, 2, ..., T with N_t images comes with dataset D_t = {x_i^t, y_i^t}. Specifically, for task t, y_i^t ∈ {1, ..., K} is the label for the i-th sample x_i^t ∈ R^{d_t} in task t, where R represents the real number space and d_t is a dimension of R. The training data matrix for D_t is denoted X_t, i.e., X_t = (x_1^t, x_2^t, ..., x_{N_t}^t). When the dataset of task t is identified, the previous training datasets D_1, ..., D_{t-1} may not be available any more, establishing different datasets for sequential training iterations).

Regarding dependent claim 12, Zhang further discloses the method of claim 10, wherein the first training data comprises a first type of data and the second training data comprises a second type of data, wherein the first training data is different from the second training data ([0058] disclosing a different dataset D_t for each task in a sequence of tasks; [0074] disclosing different tasks, such as an old/existing task of food recognition utilizing a first type of data/images and a new task of wine recognition utilizing a second type of data/images).

Regarding claims 10 and 14-16, these are method claims that are substantially the same as computing system claims 1 and 5-7, respectively. Thus, claims 10 and 14-16 are rejected for the same reasons as claims 1 and 5-7.
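The claim 6/7 pattern mapped above extends the earlier sketch: each round adds a layer to create the next model version and trains on its own, distinct dataset, mirroring Zhang's sequence D_1, ..., D_T in which earlier datasets are no longer available. A hypothetical continuation, reusing train and fake_loader from the previous sketch:

```python
# Three distinct datasets, as in claim 7 (D_1, D_2, D_3 all differ).
loaders = [fake_loader(), fake_loader(), fake_loader()]

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
train(model, loaders[0])                 # first training on D_1
for loader in loaders[1:]:               # second and third rounds
    # Add a layer before the output head to create the next version...
    model = nn.Sequential(*model[:-1],
                          nn.Linear(32, 32), nn.ReLU(),
                          model[-1])
    train(model, loader)                 # ...then train on that round's data only
# Deployment would occur only after the final (third) training.
```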
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-4, 8-9, 11, 13, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Zhu et al. (hereinafter Zhu), US 2021/0374518 A1. Zhu was disclosed in an IDS dated 6/13/2023.

Regarding dependent claim 2, Zhang teaches all the elements of claim 1. Zhang does not expressly teach wherein the machine learning model comprises a plurality of multi-directional transformer encoders. However, Zhu teaches a machine learning model comprising a plurality of multi-directional transformer encoders ([0519] machine learning models used by system 3800 may include machine learning model(s) using linear regression, logistic regression, decision trees, support vector machines (SVM), Naïve Bayes, k-nearest neighbor (kNN), K-means clustering, random forest, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., auto-encoders, convolutional, recurrent, perceptrons, Long/Short Term Memory (LSTM), Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models). Because Zhu and Zhang both address layers within a neural network, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings wherein a machine learning model comprises a plurality of multi-directional transformer encoders, as suggested by Zhu, into Zhang's computing system, with a reasonable expectation of success, such that the machine learning model comprises a plurality of multi-directional transformer encoders. This modification would have been motivated by the desire to provide a more robust system that handles various types of machine learning models (Zhu [0519]).
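For reference, one common reading of "a plurality of multi-directional transformer encoders" is a stack of bidirectional (non-causally-masked) transformer encoder layers. A minimal PyTorch sketch under that assumption; the cited Zhu paragraph lists model families generically, so this specific mapping is illustrative rather than something the reference spells out:

```python
import torch
import torch.nn as nn

# A "plurality" of encoders: six stacked transformer encoder layers whose
# self-attention attends across the whole sequence in every direction
# (no causal mask is applied), i.e., bidirectionally.
layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

x = torch.randn(8, 16, 128)   # (batch, sequence length, embedding dim)
out = encoder(x)              # same shape: (8, 16, 128)
```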
Regarding dependent claim 3, Zhang further teaches the computing system of claim 1, wherein the first training data is different from the second training data ([0058] disclosing a different dataset D_t for each task in a sequence of tasks; [0074] disclosing different tasks, such as an old/existing task of food recognition utilizing a first type of data/images (the first training data) and a new task of wine recognition utilizing a second type of data/images (different from the second training data)). Zhang does not expressly teach wherein the first training data comprises a first type of data of a set of documents and the second training data comprises a second type of data of the set of documents. However, Zhu teaches utilizing training data comprising different types of data of a set of documents ([0508] "DICOM objects may contain anywhere from one to hundreds of images or other data types," thereby teaching sets of medical documents containing diverse data types; [0560]-[0562] teaching models trained on customer or patient data files/imaging data, where patient records are broadly understood as documents). Because Zhu and Zhang both address layers within a neural network, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Zhu's teachings of utilizing different types of data from a set of documents (e.g., imaging data and non-imaging data from DICOM objects/patient records) as the first and second training data in Zhang's lifelong learning system, to teach wherein the first training data comprises a first type of data of a set of documents and the second training data comprises a second type of data of the set of documents. This modification would have been motivated by the desire to provide a more robust system capable of sequentially learning to extract multi-modal information from comprehensive medical document structures without catastrophic forgetting (Zhu [0002], [0508]).

Regarding dependent claim 4, Zhang in view of Zhu teaches the computing system of claim 3, wherein the first training data comprises a first type of information respective of a plurality of entities and the second training data comprises a second type of information respective of the plurality of entities (see Zhu [0560] pre-trained models 3806 may have been trained, on-premise, using customer or patient data generated on-premise; [0559] model training 3714 may include retraining or updating an initial model 4104 using new training data (e.g., new input data, such as customer dataset 4106); [0562] in at least one embodiment, customer dataset 4106 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training, wherein the customers/patients correspond to the plurality of entities). This modification would have been motivated by the desire to address the synchronization and communication overhead necessary to ensure accuracy among an increased number of devices, which affects overall training times (Zhu [0002]).

Regarding dependent claim 8, Zhang teaches all the elements of claim 7. Zhang does not expressly teach wherein: the first training data comprises a first type of information respective of a plurality of entities; the second training data comprises a second type of information respective of the plurality of entities; and the third training data comprises a third type of information respective of the plurality of entities.
However, Zhu teaches first training data comprising a first type of information respective of a plurality of entities ([0562] pre-trained model 3806 may be referred to as initial model 4104; [0560] pre-trained models 3806 may have been trained, on-premise, using customer or patient data generated on-premise; [0562] customer dataset 4106 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility)); second training data comprising a second type of information respective of the plurality of entities ([0559] model training 3714 may include retraining or updating an initial model 4104 (e.g., a pre-trained model) using new training data (e.g., new input data, such as customer dataset 4106, and/or new ground truth data associated with input data); [0562] in at least one embodiment, customer dataset 4106 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training 3714 (which may include, without limitation, transfer learning) on initial model 4104 to generate refined model 4112); and third training data comprising a third type of information respective of the plurality of entities (same citations, [0559] and [0562]). Because Zhu and Zhang both address the use of various information associated with training data, it would have been obvious to one of ordinary skill in the art to incorporate these teachings, as suggested by Zhu, into Zhang's computing system, with a reasonable expectation of success, such that the first training data comprises a first type of information respective of a plurality of entities; the second training data comprises a second type of information respective of the plurality of entities; and the third training data comprises a third type of information respective of the plurality of entities. This modification would have been motivated by the desire to address the synchronization and communication overhead necessary to ensure accuracy among an increased number of devices, which affects overall training times (Zhu [0002]).

Regarding dependent claim 9, Zhang teaches all the elements of claim 1. Zhang does not expressly teach wherein: the first version of the machine learning model is randomly-initialized; deploying the machine learning model comprises deploying the machine learning model in association with a plurality of documents; and the first training and the second training comprise use of training data selected from the plurality of documents.
However, Zhu teaches a first version of a machine learning model that is randomly-initialized ([0101] training framework 904 trains an untrained neural network 906 and enables it to be trained using processing resources described herein to generate a trained neural network 908; in at least one embodiment, weights may be chosen randomly); deploying the machine learning model in association with a plurality of documents ([0508] DICOM objects may contain anywhere from one to hundreds of images or other data types; [0509] a request may include input data (and associated patient data, in some examples) that is necessary to perform a request, and/or may include a selection of application(s) and/or machine learning models to be executed in processing a request); and first and second training comprising use of training data selected from the plurality of documents ([0560] pre-trained models 3806 may have been trained, on-premise, using customer or patient data generated on-premise; [0562] in at least one embodiment, customer dataset 4106 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training 3714 (which may include, without limitation, transfer learning) on initial model 4104 to generate refined model 4112). Because Zhu and Zhang both address the use of various information associated with training data, it would have been obvious to one of ordinary skill in the art to incorporate these teachings, as suggested by Zhu, into Zhang's computing system, with a reasonable expectation of success, such that the first version of the machine learning model is randomly-initialized; deploying the machine learning model comprises deploying the machine learning model in association with a plurality of documents; and the first training and the second training comprise use of training data selected from the plurality of documents. This modification would have been motivated by the desire to address the synchronization and communication overhead necessary to ensure accuracy among an increased number of devices, which affects overall training times (Zhu [0002]).

Regarding claims 11, 13, and 17-18, these are method claims that are substantially the same as computing system claims 2, 4, and 8-9, respectively. Thus, claims 11, 13, and 17-18 are rejected for the same reasons as claims 2, 4, and 8-9.

Regarding independent claim 19, Zhang teaches a method comprising: receiving a first version of a machine learning model; conducting first training on the machine learning model first version using first training data; adding a layer to the machine learning model first version after conducting the first training to create a machine learning model second version; conducting second training on the machine learning model second version using second training data; and deploying the machine learning model after the second training (as mapped above for claim 1).
Zhang does not expressly teach receiving a randomly-initialized first version of a machine learning model, the first training data comprising a first type of information respective of a plurality of documents, and the second training data comprising a second type of information respective of the plurality of documents. However, Zhu teaches receiving a randomly-initialized first version of a machine learning model ([0101] training framework 904 trains an untrained neural network 906 and enables it to be trained using processing resources described herein to generate a trained neural network 908; in at least one embodiment, weights may be chosen randomly), first training data comprising a first type of information respective of a plurality of documents ([0560] pre-trained models 3806 may have been trained, on-premise, using customer or patient data generated on-premise; [0562] in at least one embodiment, customer dataset 4106 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training 3714 (which may include, without limitation, transfer learning) on initial model 4104 to generate refined model 4112), and second training data comprising a second type of information respective of the plurality of documents (same citations, [0560] and [0562]). Because Zhu and Zhang both address the use of various information associated with training data, it would have been obvious to one of ordinary skill in the art to incorporate these teachings, as suggested by Zhu, into Zhang's method, with a reasonable expectation of success, such that the method comprises receiving a randomly-initialized first version of a machine learning model, the first training data comprising a first type of information respective of a plurality of documents, and the second training data comprising a second type of information respective of the plurality of documents. This modification would have been motivated by the desire to address the synchronization and communication overhead necessary to ensure accuracy among an increased number of devices, which affects overall training times (Zhu [0002]).
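On the "randomly-initialized first version" limitation, note that standard frameworks already construct layers with randomly drawn weights. A brief PyTorch illustration (not Zhu's code; Zhu [0101] says only that weights "may be chosen randomly"):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)   # make the random draw reproducible

# Constructing the layers already initializes their weights randomly; the
# explicit re-initialization below simply makes that randomness visible.
model_v1 = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
for module in model_v1:
    if isinstance(module, nn.Linear):
        nn.init.normal_(module.weight, mean=0.0, std=0.02)
        nn.init.zeros_(module.bias)
```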
Regarding dependent claim 20, Zhang in view of Zhu further teaches the method of claim 19, wherein the layer is a first layer, wherein the method further comprises: adding a second layer to the machine learning model second version to create a machine learning model third version (see Zhang [0077] the system may incorporate inherent correlations between the existing task and the new task and identify the added layer as a task-specific layer for the new task; [0068]; [0078] "The expanded network architecture may include adding a layer to the network architecture and expanding one or more existing layers of the network architecture"); and conducting third training on the machine learning model third version using third training data (see Zhang [0077] the system may train the ML model to perform the new task using training data for the new task without access to the training data for the old task), the third training data comprising a third type of information respective of the plurality of documents (see Zhu [0559] model training 3714 may include retraining or updating an initial model 4104 (e.g., a pre-trained model) using new training data (e.g., new input data, such as customer dataset 4106, and/or new ground truth data associated with input data); [0562] in at least one embodiment, customer dataset 4106 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training 3714 (which may include, without limitation, transfer learning) on initial model 4104 to generate refined model 4112); wherein the deploying comprises deploying the machine learning model after the third training (see Zhang [0070] the addition of each new task may require the deployment of a new deep learning model).

Response to Arguments

Applicant's amendments to claim 11 are persuasive; consequently, the claim objection set forth in the Office Action dated 9/24/2025 is hereby withdrawn.

Applicant's arguments filed in Remarks dated 1/21/2026 with respect to the rejection of claim 1 under 35 U.S.C. 102 have been fully considered but are not persuasive. Applicant argues that Zhang does not disclose "conducting second training on the machine learning model second version based on a second training data," asserting that Zhang's "ML model" is merely a first version trained on new tasks without access to the old training database, and therefore Zhang does not teach training a second version of the ML model based on distinguishable second training data. Examiner respectfully disagrees. Applicant's assertion conflates the initial machine learning model with the model after the architecture has been structurally expanded for a new task. The claims recite a chronological pipeline that is directly mirrored in Zhang's explicit disclosures:

Receiving a first version and conducting first training based on first training data: Zhang explicitly discloses an initial model that has already been trained on an initial task using an initial dataset. Specifically, [0055] discloses "learn[ing] an initial model on a first given task," [0074] discloses "obtain[ing] the previously trained model ... as the initial/existing model," and per [0058] the dataset for the previous task, D_{t-1}, corresponds to the claimed "first training data."

Adding a layer to create a second version: When encountering a new task, Zhang explicitly teaches structurally modifying this first version.
Zhang states, "The expanded network architecture may include adding a layer to the network architecture and expanding one or more existing layers" (Zhang [0078]) to create "child network architectures" (Zhang [0068]). This structurally expanded child network architecture directly corresponds to the claimed "machine learning model second version."

Conducting second training on the second version based on second training data: Zhang then discloses training this expanded architecture (the second version) on the new task. Zhang states the system will "train the model for the child network architectures" (Zhang [0069]) and "train the ML model to perform the new task using training data for the new task" (Zhang [0077]). The "training data for the new task" corresponds to the claimed "second training data."

Furthermore, Applicant's argument that the respective training datasets are not distinguishable is overcome by Zhang's express teachings. Zhang defines a sequence of tasks with distinct datasets: "Given a sequence of T tasks, task at time point t ... comes with dataset D_t ... the previous training datasets D_1 ... D_{t-1} may not be available any more" (Zhang [0058]). Zhang further provides concrete examples distinguishing the datasets, such as an old training database for "food recognition" versus a new dataset/images for "wine recognition" (Zhang [0074]). Thus, the first and second training datasets are indisputably distinct.

Because Zhang explicitly teaches adding a layer to a first version of a model to create a second version, and then training that second version on distinct second training data, the 35 U.S.C. 102 rejection is proper and is maintained. Applicant's arguments regarding the 35 U.S.C. 103 rejections rely on the same alleged deficiencies in Zhang and are therefore similarly unpersuasive.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN, whose telephone number is (571) 272-1393. The examiner can normally be reached M-F, 9:00 a.m.-5:30 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/KC CHEN/
Primary Patent Examiner, Art Unit 2143

Prosecution Timeline

Jan 13, 2023
Application Filed
Sep 22, 2025
Non-Final Rejection — §102, §103
Jan 21, 2026
Response Filed
Feb 27, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579425
PARAMETERIZED ACTIVATION FUNCTIONS TO ADJUST MODEL LINEARITY
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12566994
SYSTEMS AND METHODS TO CONFIGURE DEFAULTS BASED ON A MODEL
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12561593
METHOD FOR DETERMINING PRESENCE OF A SIGNATURE CONSISTENT WITH A PAIR OF MAJORANA ZERO MODES AND A QUANTUM COMPUTER
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12561561
Mapping User Vectors Between Embeddings For A Machine Learning Model for Authorizing Access to Resource
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12561497
AUTOMATED OPERATING MODE DETECTION FOR A MULTI-MODAL SYSTEM WITH MULTIVARIATE TIME-SERIES DATA
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner, based on his 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 99% (+67.0%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 252 resolved cases by this examiner; grant probability is derived from the career allow rate.
