Prosecution Insights
Last updated: April 19, 2026
Application No. 18/223,134

FINE-TUNED MODEL TO SOURCE FOUNDATION MODEL ATTRIBUTION

Non-Final OA: §101, §103
Filed: Jul 18, 2023
Examiner: PHAM, TUAN A
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (583 granted / 697 resolved; +28.6% vs TC avg)
Interview Lift: +27.8% (based on resolved cases with interview)
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 729 total applications across all art units
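The headline figures above are simple arithmetic over the examiner's resolved cases. A minimal sketch reproduces them from the numbers in this report; treating the "+28.6% vs TC avg" delta as an absolute percentage-point difference is an assumption:

```python
# Career figures for this examiner, as reported above.
granted = 583
resolved = 697

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 83.6%, displayed rounded as 84%

# Implied Tech Center average, assuming the +28.6% delta is in percentage points.
tc_avg = allow_rate - 28.6
print(f"Implied TC average: {tc_avg:.1f}%")  # 55.0%
```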

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 697 resolved cases.
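The per-statute deltas can be cross-checked against the reported rates. A small sketch, assuming each delta is the examiner's rate minus the Tech Center average in percentage points, recovers the implied baseline:

```python
# Per-statute rates and deltas vs the Tech Center average, as reported above.
# Assumption: each delta is an absolute percentage-point difference, so the
# implied TC average for a statute is rate - delta.
stats = {
    "§101": (19.3, -20.7),
    "§103": (47.1, +7.1),
    "§102": (8.1, -31.9),
    "§112": (10.4, -29.6),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate}% vs implied TC average {tc_avg:.1f}%")
```

All four deltas imply the same 40.0% baseline, which suggests the report may compare every statute against a single pooled Tech Center figure rather than per-statute averages.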

Office Action

Grounds of rejection: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office Action is in response to the application filed on 07/18/2023. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statements (IDS) filed on 07/18/2023 and 07/25/2023 have been considered (see form 1449, MPEP 609).

Drawings

The drawings filed on 07/18/2023 are accepted.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites the language of “causing generating, by a trained model, a training prompt response to a training prompt in a set of training prompts; training, using the training prompt and the training prompt response, an attribution model, the training resulting in a trained attribution model; and attributing, using the trained attribution model and a first prompt response generated by a fine-tuned model in response to a prompt, the fine-tuned model to a foundation model.”

Claim 1 recites the limitation “causing generating, by a trained model, a training prompt response to a training prompt in a set of training prompts” which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation as a human activity but for the recitation of generic computer components. That is, nothing in the claim element precludes the step from practically being performed as a human activity. For example, “causing” in the context of this claim encompasses the user manually creating an input.
Similarly, the limitation “training, using the training prompt and the training prompt response, an attribution model, the training resulting in a trained attribution model,” as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation as a human activity but for the recitation of generic computer components. For example, “training” in the context of this claim encompasses the user manually determining the training.

Likewise, the limitation “attributing, using the trained attribution model and a first prompt response generated by a fine-tuned model in response to a prompt, the fine-tuned model to a foundation model,” as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation as a human activity but for the recitation of generic computer components. For example, “attributing” in the context of this claim encompasses the user manually interacting with and inputting the information.

If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation as a human activity but for the recitation of generic computer components, then it falls within the “human activity” and “gathering information” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application. In particular, the claim recites only one additional element: using one or more computer readable storage devices to perform the causing, training, and attributing steps. The processor and computer readable storage in those steps are recited at a high level of generality (i.e., as a generic processor performing a generic computer function of generating input) such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the causing, training, and attributing steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.

Claim 2 is dependent on independent claim 1 and includes all the limitations of claim 1. Claim 2 recites “generating, using the trained attribution model and a vocabulary, an additional training prompt; and adding, to the set of training prompts, the additional training prompt.” The claim language provides only further generating, which is directed toward the abstract idea and does not amount to significantly more. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Claim 3 is dependent on independent claim 1 and includes all the limitations of claim 1. Claim 3 recites “the trained model comprises a trained fine-tuned model.” The claim language provides only a further trained model, which is directed toward the abstract idea and does not amount to significantly more.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Claim 4 is dependent on independent claim 1 and includes all the limitations of claim 1. Claim 4 recites “the trained model comprises a trained foundation model and a trained fine-tuned model, and the training prompt response comprises a response of the trained foundation model to the training prompt and a response of the trained fine-tuned model to the training prompt.” The claim language provides only a further trained model, which is directed toward the abstract idea and does not amount to significantly more. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Claim 5 is dependent on independent claim 1 and includes all the limitations of claim 1. Claim 5 recites “the attributing comprises generating a model attribution confidence score.” The claim language provides only further attributing, which is directed toward the abstract idea and does not amount to significantly more. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Claim 6 is dependent on independent claim 1 and includes all the limitations of claim 1.
Claim 6 recites “when the IDA width is greater than or equal to twice the decode threshold number, determining the write threshold number and the read threshold number includes establishing the write threshold number and the read threshold number with consistency constraints such that a sum of the write threshold number and the read threshold number is greater than the number of available storage units.” The claim language provides only further determining of read and write threshold numbers, which is directed toward the abstract idea and does not amount to significantly more. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Claim 7 is dependent on independent claim 1 and includes all the limitations of claim 1. Claim 7 recites “the attributing is performed using a pair of prompt responses, the pair of prompt responses comprising the first prompt response and a second prompt response, the second prompt response generated by the foundation model in response to the prompt.” The claim language provides only further attributing, which is directed toward the abstract idea and does not amount to significantly more. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.

Regarding claims 7-14: these claims are essentially the same as claims 1-6 except that they set forth the claimed invention as a computer program product rather than a method, respectively and correspondingly, and therefore are rejected for the same reasons set forth in the rejections of claims 1-6.
Regarding claims 15-20: these claims are essentially the same as claims 1-6 except that they set forth the claimed invention as a system rather than a method, respectively and correspondingly, and therefore are rejected for the same reasons set forth in the rejections of claims 1-6. Accordingly, claims 1-20 are not patent eligible.

Examiner Notes

The examiner cites particular columns, paragraphs, figures, and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Clement et al. (US PGPUB 2022/0398462, hereinafter Clement), in view of Lyman et al. (US PGPUB 2022/0051114, hereinafter Lyman).

As per claim 1, Clement discloses: A computer-implemented method comprising: causing generating, by a trained model, a training prompt response to a training prompt in a set of training prompts (Clement, e.g., [0032], [0037-0038], “...cloud platform offers various configurations of neural transformer models with attention. Neural transformer models are one type of deep learning that utilizes an attention mechanism... focus on a subset of features or tokens in an input sequence thereby learning different representations from the different positions of the tokens in an input sequence...”); training, using the training prompt and the training prompt response, an attribution model, the training resulting in a trained attribution model (Clement, e.g., [0044-0046], “...uses bi-directional attention which enables the encoder to learn the relationships of the tokens/subtokens in an input sequence both before and after their occurrence. Classifiers are trained to interpret a model's internal representation into a class label.
Since bi-directional attention allows the model's internal representation to depend on all other tokens...”); and attributing, using the trained attribution model and a first prompt response generated by a fine-tuned model in response to a prompt, the fine-tuned model to a foundation model (Clement, e.g., [0042], [0050-0052] and [0076-0081], “...generating input sequences of tokens. The pre-processing script may use a tokenizer to turn the user's fine-tuning dataset into a sequence of tokens having the same token base used by the pre-trained deep learning model... the fine-tuning script replaces the output layer of the pre-trained model with a classification layer specific for the task-specific embeddings while reusing all encoder blocks...”).

To make the record clearer regarding the language “using the training prompt and the training prompt response, an attribution model”: although, as stated above, Clement functionally discloses this feature (Clement, e.g., [0044-0046]), Lyman, in an analogous art, also discloses “using the training prompt and the training prompt response, an attribution model” (Lyman, e.g., [0067-0072], “...data attributes of an entry 352, 354, 356, and/or 358 can refer to data included in the entry itself or that is otherwise mapped to an identifier included in the entry and can be retrieved from, added to, modified... in training sets used to train processes used by one or more subsystems such as the medical scan image analysis system...” and [0097], “...the training set data can indicate one or more training set identifiers 491 indicating one or more medical scan analysis functions that utilized the medical scan in their training set, and/or indicating a particular version identifier... based on model parameter data 623 of the corresponding medical scan analysis functions...”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lyman and Clement to generate inference process visualization data for a medical scan indicating an inference process flow of a plurality of sub-models applied to the medical scan and further indicating a plurality of inference data for the medical scan generated by applying the plurality of sub-models in accordance with the inference process flow (Lyman, e.g., [abstract]).

As per claim 2, the combination of Lyman and Clement discloses: The computer-implemented method of claim 1, further comprising: generating, using the trained attribution model and a vocabulary, an additional training prompt (Clement, e.g., [0047-0048], “...training procedure, data normalization and vocabulary encoding procedures are hyperparameters that are tailored to meet a particular objective. The parameters of the model are the values of the model, such as the weights (e.g., Q, K, V), biases, subtoken and positional...”); and adding, to the set of training prompts, the additional training prompt (Clement, e.g., [0067-0070], “...inputs to the decoder block 334 are added with the positional embeddings...”) and (Lyman, e.g., [0067-0072]).

As per claim 3, the combination of Lyman and Clement discloses: The computer-implemented method of claim 1, wherein the trained model comprises a trained fine-tuned model (Clement, e.g., [abstract], [0042], [0050-0052] and [0076-0081], “...generating input sequences of tokens. The pre-processing script may use a tokenizer to turn the user's fine-tuning dataset into a sequence of tokens having the same token base used by the pre-trained deep learning model... the fine-tuning script replaces the output layer of the pre-trained model with a classification layer specific for the task-specific embeddings while reusing all encoder blocks...”).
As per claim 4, the combination of Lyman and Clement discloses: The computer-implemented method of claim 1, wherein the trained model comprises a trained foundation model and a trained fine-tuned model, and the training prompt response comprises a response of the trained foundation model to the training prompt and a response of the trained fine-tuned model to the training prompt (Clement, e.g., [abstract], [0042], [0050-0052] and [0076-0081], “...generating input sequences of tokens. The pre-processing script may use a tokenizer to turn the user's fine-tuning dataset into a sequence of tokens having the same token base used by the pre-trained deep learning model... the fine-tuning script replaces the output layer of the pre-trained model with a classification layer specific for the task-specific embeddings while reusing all encoder blocks...” and further see [0096-0098]).

As per claim 5, the combination of Lyman and Clement discloses: The computer-implemented method of claim 1, wherein the attributing comprises generating a model attribution confidence score (Lyman, e.g., [0071-0074], “...confidence score data 460, display parameter data 470, similar scan data 480, training set data 490, and/or other data relating to the medical scan...” and further see [0077-0081], “...determined by comparing some or all of confidence score data 460 to a threshold, can be determined by comparing a probability value to a threshold, and/or can be determined by comparing another continuous or discrete value indicating a calculated likelihood...”).
As per claim 6, the combination of Lyman and Clement discloses: The computer-implemented method of claim 1, wherein the attributing is performed using a pair of prompt responses, the pair of prompt responses comprising the first prompt response and a second prompt response, the second prompt response generated by the foundation model in response to the prompt (Lyman, e.g., [0058-0059], “...interface feature evaluator system 110 can be operable to generate an ordered image-to-prompt mapping by selecting a set of user interface features to be displayed with each of an ordered set of medical scans. The set of medical scans and the ordered image-to-prompt mapping can be transmitted to a set of client devices. A set of responses can be generated by each client device in response to sequentially displaying each of the set of medical scans in conjunction with a mapped user interface feature indicated in the ordered image-to-prompt mapping via a user interface...” and [0111], [0268], [0283-0284], “...The interface can prompt the user to indicate the appropriate scan category 1120 and/or prompt the user to confirm and/or edit the inferred scan category...”) (the examiner asserts that multiple inputs/prompts to select the data in the category = pair of prompt responses) and further see [0054], [0095], “...mapping pair in medical label alias database...”).

Claim 7 is essentially the same as claim 1 except that it sets forth the claimed invention as a computer program product rather than a method, respectively and correspondingly, and therefore is rejected for the same reasons set forth in the rejection of claim 1.
As per claim 8, the combination of Lyman and Clement discloses: The computer program product of claim 7, wherein the stored program instructions are stored in a computer readable storage device in a data processing system, and wherein the stored program instructions are transferred over a network from a remote data processing system (Clement, e.g., [0045], [0067-0068], [0104], [0147], “...stored and run locally, stored and run by another subsystem 101, and/or stored in the medical scan analysis function database 346, where the function and/or parameters of the function can be retrieved from the database by the medical scan diagnosing system...”).

As per claim 9, the combination of Lyman and Clement discloses: The computer program product of claim 7, wherein the stored program instructions are stored in a computer readable storage device in a server data processing system, and wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system, further comprising: program instructions to meter use of the program instructions associated with the request (Lyman, e.g., [0303] and [0309-0320], “...the lesion size, shape, diameter, and/or volume, and/or other characteristics of the lesion such as other abnormality classification data 445 can be determined for each scan, and the changes in these features over time can be measured and tracked... determined to shrink, grow, or disappear over subsequent medical scans, and/or new lesions can be detected to appear over subsequent medical scans.
Performing such calculations automatically by utilizing the lesion tracking system 3002 can generate more precise measurements than those generated by a radiologist's visual inspection of one or more medical scans...”); and program instructions to generate an invoice based on the metered use (Lyman, e.g., [0077-0080], [0088-0089], [0100], [0141], “generate quality score”) (the examiner asserts that generating a quality score is equivalent to generating an invoice based on the metered use).

As per claim 10, the combination of Lyman and Clement discloses: The computer program product of claim 7, further comprising: generating, using the trained attribution model and a vocabulary, an additional training prompt (Clement, e.g., [0047-0048], “...training procedure, data normalization and vocabulary encoding procedures are hyperparameters that are tailored to meet a particular objective. The parameters of the model are the values of the model, such as the weights (e.g., Q, K, V), biases, subtoken and positional...”); and adding, to the set of training prompts, the additional training prompt (Clement, e.g., [0067-0070], “...inputs to the decoder block 334 are added with the positional embeddings...”) and (Lyman, e.g., [0067-0072]).

As per claim 11, the combination of Lyman and Clement discloses: The computer program product of claim 7, wherein the trained model comprises a trained fine-tuned model (Clement, e.g., [abstract], [0042], [0050-0052] and [0076-0081], “...generating input sequences of tokens. The pre-processing script may use a tokenizer to turn the user's fine-tuning dataset into a sequence of tokens having the same token base used by the pre-trained deep learning model... the fine-tuning script replaces the output layer of the pre-trained model with a classification layer specific for the task-specific embeddings while reusing all encoder blocks...”).
As per claim 12, the combination of Lyman and Clement discloses: The computer program product of claim 7, wherein the trained model comprises a trained foundation model and a trained fine-tuned model, and the training prompt response comprises a response of the trained foundation model to the training prompt and a response of the trained fine-tuned model to the training prompt (Clement, e.g., [abstract], [0042], [0050-0052] and [0076-0081], “...generating input sequences of tokens. The pre-processing script may use a tokenizer to turn the user's fine-tuning dataset into a sequence of tokens having the same token base used by the pre-trained deep learning model... the fine-tuning script replaces the output layer of the pre-trained model with a classification layer specific for the task-specific embeddings while reusing all encoder blocks...” and further see [0096-0098]).

As per claim 13, the combination of Lyman and Clement discloses: The computer program product of claim 7, wherein the attributing comprises generating a model attribution confidence score (Lyman, e.g., [0071-0074], “...confidence score data 460, display parameter data 470, similar scan data 480, training set data 490, and/or other data relating to the medical scan...” and further see [0077-0081], “...determined by comparing some or all of confidence score data 460 to a threshold, can be determined by comparing a probability value to a threshold, and/or can be determined by comparing another continuous or discrete value indicating a calculated likelihood...”).
As per claim 14, the combination of Lyman and Clement discloses: The computer program product of claim 7, wherein the attributing is performed using a pair of prompt responses, the pair of prompt responses comprising the first prompt response and a second prompt response, the second prompt response generated by the foundation model in response to the prompt (Lyman, e.g., [0058-0059], “...interface feature evaluator system 110 can be operable to generate an ordered image-to-prompt mapping by selecting a set of user interface features to be displayed with each of an ordered set of medical scans. The set of medical scans and the ordered image-to-prompt mapping can be transmitted to a set of client devices. A set of responses can be generated by each client device in response to sequentially displaying each of the set of medical scans in conjunction with a mapped user interface feature indicated in the ordered image-to-prompt mapping via a user interface...” and [0111], [0268], [0283-0284], “...The interface can prompt the user to indicate the appropriate scan category 1120 and/or prompt the user to confirm and/or edit the inferred scan category...”) (the examiner asserts that multiple inputs/prompts to select the data in the category = pair of prompt responses) and further see [0054], [0095], “...mapping pair in medical label alias database...”).

Claims 15-20 are essentially the same as claims 1-6 except that they set forth the claimed invention as a system rather than a method, respectively and correspondingly, and therefore are rejected for the same reasons set forth in the rejections of claims 1-6.

Additional Art Considered

The prior art made of record and not relied upon is considered pertinent to the Applicants' disclosure.
The following patents and papers are cited to further show the state of the art at the time of Applicants' invention with respect to machine learning model management, namely generating, by a trained model, a training prompt response to a training prompt in a set of training prompts and using the trained attribution model and a first prompt response generated by a fine-tuned model in response to a prompt, the fine-tuned model to a foundation model.

a. Prakash Selvakumar (US PGPUB 2020/0356838, hereinafter Selvakumar), “Method And System For Training A Machine Learning System Using Context Injection,” discloses “training a machine-learning (ML) system to function as a chatbot, and training an ML system includes providing to the machine-learning system: in a first iteration, a first input-output pair that includes a first input and a first output; and, in a second iteration, a second input-output pair that includes a second input and a second output, where the second input includes the first input-output pair and the second output is different from the first output, so that a context for the second input-output pair is stored in the memory of the ML system.” Selvakumar further teaches a prompt-response pair [0077-0078] and the training of a typical machine learning system and subsequent use of the system [fig. 1]. Selvakumar also teaches calculating a training model [0030], a machine learning model calculated using the current input.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN A PHAM, whose telephone number is (571) 270-3173. The examiner can normally be reached M-F 7:45 AM - 6:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TUAN A PHAM/
Primary Examiner, Art Unit 2163

Prosecution Timeline

Jul 18, 2023
Application Filed
Feb 23, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596679: METHOD AND APPARATUS PROVIDING A TIERED ELASTIC CLOUD STORAGE TO INCREASE DATA RESILIENCY
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596758: IoT Enhanced Search Results
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12585718: System and Method for Feature Determination and Content Selection
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12572561: METHOD AND APPARATUS FOR SYNCHRONOUSLY UPDATING METADATA IN DISTRIBUTED DATABASE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566777: SYSTEMS AND METHODS OFFLINE DATA SYNCHRONIZATION
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview (+27.8%): 99%
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 697 resolved cases by this examiner. Grant probability derived from career allow rate.
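The projection figures are internally consistent under a simple model: adding the interview lift to the base grant probability and capping the result reproduces the displayed "with interview" number. The additive combination and the cap are assumptions for illustration; the report does not state its exact formula.

```python
# Projection figures as reported above.
base_grant_prob = 84.0   # % (career allow rate)
interview_lift = 27.8    # percentage points

# Assumed combination: additive lift, capped at 99%. This is one formula that
# reproduces the displayed figure, not necessarily the report's actual method.
with_interview = min(base_grant_prob + interview_lift, 99.0)
print(f"Grant probability with interview: {with_interview:.0f}%")  # 99%
```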
