Prosecution Insights
Last updated: April 19, 2026
Application No. 18/887,291

METHOD OF TRAINING LARGE MODEL-BASED TEXT RETRIEVAL MODEL, METHOD OF RETRIEVING TEXT, DEVICE AND STORAGE MEDIUM

Non-Final OA (§102, §103)
Filed
Sep 17, 2024
Examiner
HARMON, COURTNEY N
Art Unit
2159
Tech Center
2100 — Computer Architecture & Software
Assignee
BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round
3 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 62% of resolved cases (262 granted / 425 resolved; +6.6% vs TC avg)
Interview Lift: +10.4% for resolved cases with interview (moderate, roughly +10% lift)
Avg Prosecution: 3y 6m typical timeline (22 applications currently pending)
Total Applications: 447 across all art units (career history)
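The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using only the counts shown on this page (the helper names are illustrative, not from any real API):

```python
# Sketch: deriving the examiner metrics shown above from raw counts.
# The counts (262 granted, 425 resolved) and the 72% with-interview
# rate come from this page; function names are illustrative only.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(with_interview_rate: float, base_rate: float) -> float:
    """Lift, in percentage points, attributed to conducting an interview."""
    return with_interview_rate - base_rate

base = allow_rate(262, 425)        # 61.6%, shown rounded as 62%
lift = interview_lift(72.0, base)  # +10.4 percentage points

print(f"Career allow rate: {base:.1f}%")
print(f"Interview lift:    +{lift:.1f} pts")
```

The displayed "62%" is the rounded value of 262/425; the "+10.4%" lift is the gap between the 72% with-interview rate and that baseline.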

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 425 resolved cases.
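The per-statute deltas are straight differences against the Tech Center average; notably, the four deltas shown all imply the same TC baseline of 40.0%. A sketch reproducing the figures (values taken from the table above; the 40.0% baseline is inferred from the deltas, not stated on the page):

```python
# Sketch: statute-specific allow rates vs. the Tech Center average.
# Examiner rates are the figures shown above; the implied TC average
# works out to 40.0% for every statute listed (e.g. 65.1 - 25.1).

examiner_rates = {"101": 17.2, "103": 65.1, "102": 8.0, "112": 6.1}
tc_average = 40.0  # inferred baseline, not an official USPTO figure

for statute, rate in examiner_rates.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

That every statute resolves to the same baseline suggests a single blended TC-average estimate is used for all four comparisons.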

Office Action

§102 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to communications filed December 15, 2025.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 15, 2025 has been entered.

Response to Arguments

Applicant's arguments filed December 15, 2025 regarding the rejection of claims 1-20 under 35 U.S.C. 102(a)(2) and 35 U.S.C. 103 have been fully considered, but they are moot in view of the new grounds of rejection.

Status of Claims

Claims 1-20 are pending, of which claims 1, 8, 10, 17, 19, and 20 are in independent form. Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) and 35 U.S.C. 103.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 17-20 are rejected under 35 U.S.C. 
102(a)(2) as being anticipated by Joynt (US 2025/0117587) (hereinafter Joynt).

Regarding claim 17, Joynt teaches an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, are configured to cause the at least one processor to implement the method of claim 8 (see Fig. 1, para [0020-0021], discloses processor and memory).

Regarding claim 18, Joynt teaches the device of claim 17. Joynt further teaches wherein the target text retrieval model is a large language model (see para [0015], para [0025], discloses large language model), and the instructions are further configured to cause the at least one processor (see Fig. 1, discloses processor) to at least: repeatedly input the text to be queried into the target text retrieval model until a subsequent output result of the current output result hits the second target reference text among the plurality of reference texts, in response to a determination that the current output result hits none of the plurality of reference texts (see Figs. 2-3, para [0030], para [0041], discloses a planner for semantic processing assembling a plan over several iterations, identifying an insufficient set of tasks and supplementing a partial plan with additional steps in response to completion of tasks, and performing iterations of the plan until a fit of the model becomes desirable).

Regarding claim 19, Joynt teaches a non-transitory computer-readable storage medium having computer instructions therein, wherein the computer instructions are configured to cause a computer to implement the method of claim 1 (see Fig. 1, para [0021], discloses medium). 
Regarding claim 20, Joynt teaches a non-transitory computer-readable storage medium having computer instructions therein, wherein the computer instructions are configured to cause a computer to implement the method of claim 8 (see Fig. 1, para [0021], discloses medium).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5-6, 8-11, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Joynt in view of Dolph et al. (US 2025/0173541) (hereinafter Dolph).

Regarding claim 1, Joynt teaches a method of training a text retrieval model, comprising: inputting a sample text to be queried into a first text retrieval model to obtain a first output text mark (see Fig. 
1, para [0003], para [0015-0016], discloses a complex prompt (sample text) to be queried in a machine learning large language model (first text retrieval model) to obtain tasks (first output text mark) performed sequentially by the large language model, which uses planners to semantically decompose complex prompts into multiple steps); wherein the sample text to be queried corresponds to a first target reference text among a plurality of reference texts (see Fig. 2, Fig. 6, para [0016], para [0030, 0032], discloses complex prompt corresponds to an actionable task (first target reference text) in a series of separate steps or tasks), the text-mark label is a reference text mark of the first target reference text (see Fig. 2, para [0028], para [0030, 0032], discloses specialist models having training data specific to a particular domain that is relevant to intent, and task prompts labeled (reference text mark) of the actionable task, as suited for a particular specialist model), a plurality of reference text marks are determined according to a reference text sequence (see Fig. 6, para [0016], para [0030], para [0051], discloses tasks (reference text marks) for resolution of the complex prompt, utilizing a planner assembling a plan over several iterations to complete tasks and semantic analysis of respective domain descriptors (reference text sequence) as related to steps to produce respective model relevance scores), and the reference text sequence is determined according to semantics of the plurality of reference texts (see Figs. 5-6, para [0027, 0032], para [0050], discloses semantic analysis determining domain descriptors (reference text sequence) for respective specialist models); and wherein the first text retrieval model is obtained by training an initial text retrieval model using the plurality of reference texts and the plurality of reference text marks corresponding to the plurality of reference texts, so as to output a reference text mark corresponding to a reference text (see Figs. 2-4, para [0030-0032], discloses training a model 210 (initial text retrieval model) in Fig. 2, using tasks and labeled training data that correspond to respective tasks).

Joynt does not explicitly teach training the first text retrieval model according to a text-mark label of the sample text to be queried and the first output text mark to obtain a target text retrieval model, the first target reference text is an answer text for the sample text to be queried; and wherein each of the first output text mark and the reference text mark is a string.

Dolph teaches training the first text retrieval model according to a text-mark label of the sample text to be queried and the first output text mark to obtain a target text retrieval model (see Fig. 1, para [0058-0060], discloses training a large language model according to domain-specific (text-mark label) training data corresponding to a prompt (sample text) and intended output (first output text mark) to obtain an artificial intelligence model (target text retrieval model) that has applied domain knowledge via sidekick AI), the first target reference text is an answer text for the sample text to be queried (see Fig. 1, Figs. 5-6, para [0058], para [0061-0062], discloses answers for the prompt); and wherein each of the first output text mark and the reference text mark is a string (see Fig. 1, Figs. 6-7, para [0053-0054], para [0062], para [0098], discloses intended output and domain-specific training data in calculating probabilities for the output response (string)). 
Joynt/Dolph are analogous arts, as they are each from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Joynt to include a string from the disclosure of Dolph. The motivation to combine these arts is disclosed by Dolph as "quickly and easily create customized solutions tailored to their specific needs thereby improving their efficiency and productivity" (para [0067]), and including a string is well known to persons of ordinary skill in the art; therefore, one of ordinary skill would have had good reason to pursue the known options within his or her technical grasp that would lead to anticipated success.

Regarding claim 10, Joynt teaches an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor (see Fig. 1, discloses processor and memory), are configured to cause the at least one processor to at least: input a sample text to be queried into a first text retrieval model to obtain a first output text mark (see Fig. 1, para [0003], para [0015-0016], discloses a complex prompt (sample text) to be queried in a machine learning large language model (first text retrieval model) to obtain tasks (first output text mark) performed sequentially by the large language model, which uses planners to semantically decompose complex prompts into multiple steps); wherein the sample text to be queried corresponds to a first target reference text among a plurality of reference texts (see Fig. 2, Fig. 6, para [0016], para [0030, 0032], discloses complex prompt corresponds to an actionable task (first target reference text) in a series of separate steps or tasks), the text-mark label is a reference text mark of the first target reference text (see Fig. 2, para [0028], para [0030, 0032], discloses specialist models having training data specific to a particular domain that is relevant to intent, and task prompts labeled (reference text mark) of the actionable task, as suited for a particular specialist model), a plurality of reference text marks are determined according to a reference text sequence (see Fig. 6, para [0016], para [0030], para [0051], discloses tasks (reference text marks) for resolution of the complex prompt, utilizing a planner assembling a plan over several iterations to complete tasks and semantic analysis of respective domain descriptors (reference text sequence) as related to steps to produce respective model relevance scores), and the reference text sequence is determined according to semantics of the plurality of reference texts (see Figs. 5-6, para [0027, 0032], para [0050], discloses semantic analysis determining domain descriptors (reference text sequence) for respective specialist models); and wherein the first text retrieval model is obtained by training an initial text retrieval model using the plurality of reference texts and the plurality of reference text marks corresponding to the plurality of reference texts, so as to output a reference text mark corresponding to a reference text (see Figs. 2-4, para [0030-0032], discloses training a model 210 (initial text retrieval model) in Fig. 2, using tasks and labeled training data that correspond to respective tasks). 
Joynt does not explicitly teach training the first text retrieval model according to a text-mark label of the sample text to be queried and the first output text mark to obtain a target text retrieval model, the first target reference text is an answer text for the sample text to be queried; and wherein each of the first output text mark and the reference text mark is a string.

Dolph teaches training the first text retrieval model according to a text-mark label of the sample text to be queried and the first output text mark to obtain a target text retrieval model (see Fig. 1, para [0058-0060], discloses training a large language model according to domain-specific (text-mark label) training data corresponding to a prompt (sample text) and intended output (first output text mark) to obtain an artificial intelligence model (target text retrieval model) that has applied domain knowledge via sidekick AI), the first target reference text is an answer text for the sample text to be queried (see Fig. 1, Figs. 5-6, para [0058], para [0061-0062], discloses answers for the prompt); and wherein each of the first output text mark and the reference text mark is a string (see Fig. 1, Figs. 6-7, para [0053-0054], para [0062], para [0098], discloses intended output and domain-specific training data in calculating probabilities for the output response (string)).

Joynt/Dolph are analogous arts, as they are each from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Joynt to include a string from the disclosure of Dolph. The motivation to combine these arts is disclosed by Dolph as "quickly and easily create customized solutions tailored to their specific needs thereby improving their efficiency and productivity" (para [0067]), and including a string is well known to persons of ordinary skill in the art; therefore, one of ordinary skill would have had good reason to pursue the known options within his or her technical grasp that would lead to anticipated success.

Regarding claims 2 and 11, Joynt/Dolph teaches the method of claim 1 and the device of claim 10. Joynt further teaches wherein the first text retrieval model is obtained by training the initial text retrieval model using the plurality of reference texts and the plurality of reference text marks corresponding to the plurality of reference texts by: inputting the reference text into the initial text retrieval model to obtain a second output text mark of the reference text (see Figs. 4-5, para [0043], para [0046], discloses obtaining second labeled data (second output text mark)); and performing a full fine-tuning on the initial text retrieval model according to the second output text mark and the reference text mark, so as to obtain the first text retrieval model (see Fig. 4, para [0043], para [0080], discloses fine-tuning of the model according to the second labeled data).

Regarding claims 5 and 14, Joynt/Dolph teaches the method of claim 1 and the device of claim 10. Joynt further teaches wherein the sample text to be queried corresponds to a plurality of first target reference texts, the reference text mark comprises a first reference text sub-mark and a second reference text sub-mark, and the plurality of first target reference texts have a same first reference text sub-mark (see para [0015-0016], para [0032], discloses a complex prompt separated into respective tasks which have respective prompts labeled (sub-marks) for a particular specialist model). 
Regarding claims 6 and 15, Joynt/Dolph teaches the method of claim 1 and the device of claim 10. Joynt further teaches wherein the training the first text retrieval model comprises: performing a full fine-tuning on the first text retrieval model (see Fig. 4, para [0043], para [0070-0071], discloses performing fine-tuning on a model).

Regarding claim 8, Joynt teaches a method of retrieving a text, comprising: inputting a text to be queried into a target text retrieval model to obtain a current output result (see Figs. 1-2, para [0003], para [0016-0018], discloses a complex query (text) to be queried in a specialized machine learning large language model (target text retrieval model) to obtain tasks of specialized domains (current output result)), the sample text to be queried corresponds to a first target reference text among a plurality of reference texts (see Fig. 2, Fig. 6, para [0016], para [0030, 0032], discloses complex prompt corresponds to an actionable task (first target reference text) in a series of separate steps or tasks), and the text-mark label is a reference text mark of the first target reference text (see Fig. 2, para [0028], para [0030, 0032], discloses specialist models having training data specific to a particular domain that is relevant to intent, and task prompts labeled (reference text mark) of the actionable task, as suited for a particular specialist model); and determining a second target reference text among the plurality of reference texts as a retrieval result corresponding to the text to be queried, in response to a determination that the current output result hits the second target reference text (see Figs. 4-5, para [0043], para [0046], discloses determining second tasks corresponding to second labeled data in training the machine learning model). 
Joynt does not explicitly teach wherein the target text retrieval model is obtained by training a first text retrieval model using a sample text to be queried and a text-mark label, the first target reference text is an answer text for the sample text to be queried; and each of the current output result and the reference text mark is a string.

Dolph teaches wherein the target text retrieval model is obtained by training a first text retrieval model using a sample text to be queried and a text-mark label (see Fig. 1, para [0058-0060], discloses training a large language model (first text retrieval model) according to domain-specific (text-mark label) training data corresponding to a prompt (sample text) and intended output to obtain an artificial intelligence model (target text retrieval model) that has applied domain knowledge via sidekick AI), the first target reference text is an answer text for the sample text to be queried (see Fig. 1, Figs. 5-6, para [0058], para [0061-0062], discloses answers for the prompt); and each of the current output result and the reference text mark is a string (see Fig. 1, Figs. 6-7, para [0053-0054], para [0062], para [0098], discloses intended output and domain-specific training data in calculating probabilities for the output response (string)).

Joynt/Dolph are analogous arts, as they are each from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Joynt to include a string from the disclosure of Dolph. The motivation to combine these arts is disclosed by Dolph as "quickly and easily create customized solutions tailored to their specific needs thereby improving their efficiency and productivity" (para [0067]), and including a string is well known to persons of ordinary skill in the art; therefore, one of ordinary skill would have had good reason to pursue the known options within his or her technical grasp that would lead to anticipated success.

Regarding claim 9, Joynt/Dolph teaches the method of claim 8. Joynt further teaches wherein the target text retrieval model is a large language model (see para [0015], para [0025], discloses large language model), and the instructions are further configured to cause the at least one processor (see Fig. 1, discloses processor) to at least: repeatedly input the text to be queried into the target text retrieval model until a subsequent output result of the current output result hits the second target reference text among the plurality of reference texts, in response to a determination that the current output result hits none of the plurality of reference texts (see Figs. 2-3, para [0030], para [0041], discloses a planner for semantic processing assembling a plan over several iterations, identifying an insufficient set of tasks and supplementing a partial plan with additional steps in response to completion of tasks, and performing iterations of the plan until a fit of the model becomes desirable).

Claims 3-4, 7, 12-13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Joynt in view of Dolph as applied to claims 1 and 10, and further in view of Polatkan et al. (US 2025/0103619) (hereinafter Polatkan).

Regarding claims 3 and 12, Joynt/Dolph teaches the method of claim 1 and the device of claim 10. 
Joynt/Dolph does not explicitly teach wherein the performing a full fine-tuning on the initial text retrieval model according to the second output text mark and the reference text mark comprises: determining a cross-entropy loss according to the second output text mark and the reference text mark; and performing the full fine-tuning on the initial text retrieval model according to the cross-entropy loss.

Polatkan teaches wherein the performing a full fine-tuning on the initial text retrieval model according to the second output text mark and the reference text mark comprises: determining a cross-entropy loss according to the second output text mark and the reference text mark (see Fig. 3, para [0111], para [0123], discloses determining a cross-entropy loss according to second expert evidence training data and corresponding labels); and performing the full fine-tuning on the initial text retrieval model according to the cross-entropy loss (see Fig. 3, para [0123], discloses performing fine-tuning on the model according to the cross-entropy loss).

Joynt/Dolph/Polatkan are analogous arts, as they are each from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Joynt/Dolph to include a cross-entropy loss from the disclosure of Polatkan. The motivation to combine these arts is disclosed by Polatkan as "improve the reliability and/or accuracy of the levels of expertise represented by the expertise embeddings" (para [0022]), and including a cross-entropy loss is well known to persons of ordinary skill in the art; therefore, one of ordinary skill would have had good reason to pursue the known options within his or her technical grasp that would lead to anticipated success.

Regarding claims 4 and 13, Joynt/Dolph teaches the method of claim 1 and the device of claim 10. 
Joynt/Dolph does not explicitly teach wherein the plurality of reference texts are from N sets of reference texts, each of the N sets of reference texts comprises one or more reference texts, and N is an integer greater than 1, and wherein the reference text sequence is determined according to the semantics of the plurality of reference texts by: performing clustering on the N sets of reference texts separately to adjust an order of the one or more reference texts in each of the N sets of reference texts, so as to obtain N processed text sets; obtaining a fusion text set according to the N processed text sets; and performing clustering on the fusion text set according to respective semantics of the plurality of reference texts to adjust an order of the plurality of reference texts in the fusion text set, so as to obtain the reference text sequence.

Polatkan teaches wherein the plurality of reference texts are from N sets of reference texts, each of the N sets of reference texts comprises one or more reference texts, and N is an integer greater than 1 (see Fig. 1, Table 1, para [0034], discloses entity features (N sets of reference texts) and N is a positive integer), and wherein the reference text sequence is determined according to the semantics of the plurality of reference texts by: performing clustering on the N sets of reference texts separately to adjust an order of the one or more reference texts in each of the N sets of reference texts, so as to obtain N processed text sets (see Fig. 1, Table 1, para [0058], para [0067], discloses ranking items of evidence based on source type, obtaining source confidence scores for identified sources, and ranking the items of evidence based on the source confidence scores); obtaining a fusion text set according to the N processed text sets (see Fig. 1, Table 1, para [0067-0068], discloses obtaining entity expert embeddings (fusion text set) based on weights for respective features, such as feature F1 of Table 1); and performing clustering on the fusion text set according to respective semantics of the plurality of reference texts to adjust an order of the plurality of reference texts in the fusion text set, so as to obtain the reference text sequence (see Fig. 1, Fig. 3, para [0067-0069], discloses expertise levels based on source confidence scores associated with expertise embeddings extracted and trained).

Joynt/Dolph/Polatkan are analogous arts, as they are each from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Joynt/Dolph to include the claimed clustering from the disclosure of Polatkan. The motivation to combine these arts is disclosed by Polatkan as "improve the reliability and/or accuracy of the levels of expertise represented by the expertise embeddings" (para [0022]), and such clustering is well known to persons of ordinary skill in the art; therefore, one of ordinary skill would have had good reason to pursue the known options within his or her technical grasp that would lead to anticipated success.

Regarding claims 7 and 16, Joynt/Dolph teaches the method of claim 1 and the device of claim 10. Joynt further teaches that the first text retrieval model is a large language model (see para [0015], para [0025], discloses large language model). Joynt/Dolph does not explicitly teach wherein the reference text is a rule text. Polatkan teaches wherein the reference text is a rule text (see Fig. 1, para [0042], discloses a rule applied in the feature extractor). Joynt/Dolph/Polatkan are analogous arts, as they are each from the same field of endeavor of database systems. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to modify the system of Joynt/Dolph to include a rule text from the disclosure of Polatkan. The motivation to combine these arts is disclosed by Polatkan as "improve the reliability and/or accuracy of the levels of expertise represented by the expertise embeddings" (para [0022]), and including a rule text is well known to persons of ordinary skill in the art; therefore, one of ordinary skill would have had good reason to pursue the known options within his or her technical grasp that would lead to anticipated success.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY HARMON, whose telephone number is (571) 270-5861. The examiner can normally be reached M-F, 9am - 5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann Lo, can be reached at 571-272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Courtney Harmon/
Primary Examiner, Art Unit 2159

Prosecution Timeline

Sep 17, 2024
Application Filed
Apr 30, 2025
Non-Final Rejection — §102, §103
Aug 04, 2025
Response Filed
Sep 16, 2025
Final Rejection — §102, §103
Nov 12, 2025
Response after Non-Final Action
Dec 15, 2025
Request for Continued Examination
Dec 26, 2025
Response after Non-Final Action
Feb 12, 2026
Non-Final Rejection — §102, §103 (current)
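For reference, the elapsed prosecution time implied by the timeline can be checked with ordinary date arithmetic; a sketch using the filing and current-OA dates above:

```python
from datetime import date

# Dates taken from the prosecution timeline above.
filed = date(2024, 9, 17)       # Application Filed
current_oa = date(2026, 2, 12)  # Current Non-Final Rejection

elapsed_days = (current_oa - filed).days
print(f"{elapsed_days} days (~{elapsed_days / 30.44:.0f} months) from filing to the current OA")
```

That works out to roughly 1 year 5 months so far, well short of the 3y 6m median time to grant shown in the projections below.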

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602439: SEARCH EXPERIENCE MANAGEMENT SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12566772: SYSTEMS AND METHODS FOR DATA INGESTION FOR SUPPLY CHAIN OPTIMIZATION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561310: METADATA REFRESHMENT FOR A WEB SERVICE
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12547612: ATOMIC AND INCREMENTAL TARGET STATE DEFINITIONS FOR DATABASE ENTITIES
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12536157: REPORT MANAGEMENT SYSTEM
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 72% (+10.4%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 425 resolved cases by this examiner. Grant probability derived from career allow rate.
