Prosecution Insights
Last updated: April 19, 2026
Application No. 18/627,828

PERFORMANCE EVALUATION OF GENERATIVE QUESTION-ANSWERING SYSTEMS

Final Rejection — §103
Filed: Apr 05, 2024
Examiner: PULLIAS, JESSE SCOTT
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 8m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% (above average; 873 granted / 1052 resolved; +21.0% vs TC avg)
Interview Lift: +13.0% (moderate; based on resolved cases with interview)
Avg Prosecution: 2y 8m typical timeline; 47 applications currently pending
Career History: 1099 total applications across all art units

Statute-Specific Performance

§101: 15.0% (-25.0% vs TC avg)
§103: 50.4% (+10.4% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 1052 resolved cases.
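A quick consistency check on these figures, assuming each "vs TC avg" delta is the examiner's rate minus the Tech Center average: all four statute rows imply the same underlying TC average, which suggests the deltas were computed against a single estimate.

```python
# Consistency check: assuming delta = (examiner rate) - (Tech Center average),
# each statute row implies the same TC average. Figures are from the table above.
rows = {"101": (15.0, -25.0), "103": (50.4, 10.4),
        "102": (19.7, -20.3), "112": (4.9, -35.1)}
implied = {statute: rate - delta for statute, (rate, delta) in rows.items()}
for statute, tc_avg in implied.items():
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")
```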

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to correspondence filed 02/18/2026 regarding application 18/627,828, in which claims 1, 2, 5, 7, 8, 10, 12, 13, 15-18, and 20 were amended, claim 11 was cancelled, and new claim 21 was added. Claims 1-10 and 12-21 are pending in the application and have been considered.

Response to Arguments

The examiner agrees with Applicant on page 8 that the amendments to claims 1, 2, 5, 7, 8, 10, 12, 13, 15-18, and 20 and the addition of new claim 21 do not introduce new matter.

The examiner agrees with Applicant's arguments on pages 8-11 that Zhang does not disclose a regression model configured to output an evaluation for an input question-answer pair, as recited in amended independent claims 1, 12, and 17. The 35 U.S.C. 102(a)(2) rejections based on Zhang are withdrawn.

On page 10, Applicant argues that Ferrucci does not disclose an "evaluation model" that comprises "a regression model configured to output an evaluation for an input question-answer pair." The examiner respectfully disagrees. Ferrucci discloses a regression model for assigning answer confidence for a QA scoring module, [0028], which evaluates how well the answer answers the question along the dimension of confidence.

Regarding Applicant's assertion that Ferrucci does not disclose "…where such a model is trained based at least on a prior question-answer pair and an output from a LLM that evaluates the prior question-answer pair", one cannot show non-obviousness by attacking references individually when the rejection is based upon a combination of references (the amended independent claims are now rejected based on a combination of Zhang and Ferrucci, a new ground of rejection in response to Applicant's amendments).
Zhang discloses that the online system provides a batch of evaluation request prompts to a machine-learned language model with each input-output pair, i.e. question-answer pair, [0143], which is an LLM, [0034], and generates an evaluation label, [0143], which may be a generated score, [0072]. Zhang further discloses that the online system generates training samples of labeled historical input-output pairs, i.e. scored question-answer pairs, for the classification model and trains the classification model using a supervised training technique, [0143], the classification model being trained by comparing the classification score to the label score with a loss function, [0067], [0072]. Therefore, in Zhang, the model is trained based at least on a prior question-answer pair and an output from an LLM that evaluates the prior question-answer pair.

Applicant's arguments on pages 10-12 are similar to those addressed above, and are not persuasive for similar reasons.

Claim Objections

In claim 21, line 5, should "select" be "selecting"?

Claim Interpretation

Claims 17-21 are directed to a "computer-readable storage medium". The specification explicitly states that this term does not encompass communication media, propagating signals, and signals per se (paragraph [0146], page 45). Claims 17-20 are interpreted as only encompassing eligible storage medium types.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-9, 12, 15, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20250200356) in view of Ferrucci et al. (US 20130035931).

Consider claim 1, Zhang discloses a system for evaluating the performance of a question-answering model (machine learning model that evaluates results of a classification model by generating a score evaluating an input-output pair of a question-answering task, [0070-0072], [0039]), the system comprising: a processor (computer processor, [0171]); and a memory device that stores program code structured to cause the processor to (a non-transitory, tangible, computer-readable medium stores instructions executed by a processor, [0171]-[0172]; "memory" is inherent in this computer architecture): obtain a set of prior question-answer pair comprising a question and an associated answer (online concierge system 140 performs question-answering using the model serving system based on knowledge sources, [0039]; the online system stores these historical inputs and outputs of the classification model, [0143], i.e. question-answer pairs from the question-answering task, [0039]); receive an evaluation score for the prior question-answer pair from a large language model (LLM) (online system provides a batch of evaluation request prompts to a machine-learned language model with each input-output pair, i.e. question-answer pair, [0143], which is an LLM, [0034], and generates an evaluation label, [0143], which may be a generated score, [0072]); train an evaluation model based on features that comprise information from the prior question-answer pair and labels based on the evaluation score for the prior question-answer pair (online system generates training samples of labeled historical input-output pairs, i.e. scored question-answer pairs, for the classification model and trains the classification model using a supervised training technique, [0143]; classification model trained by comparing the classification score to the label score with a loss function, [0067], [0072]); obtain a current question-answer pair (online concierge system receives an input query provided by a user, [0145], and provides an answer in the form of images and descriptions as an input-output pair, [0146]; although Zhang refers to the input-output pair as a query-item pair in this context, for a question-answering task it would be a question-answer pair, [0039]); and generate a current evaluation score for the current question-answer pair by applying the current question-answer pair to the evaluation model (query-item pairs, which in the context of question-answering are question-answer pairs, see above, are supplied to the machine-learned language model to generate evaluation results, such as an evaluation result category, [0148-0149]; the machine-learned language model is the classification model and generates evaluation scores instead of or in addition to labels, [0071-0072]).

Zhang does not specifically mention the evaluation model comprising a regression model configured to output an evaluation for an input question-answer pair. Ferrucci discloses an evaluation model comprising a regression model configured to output an evaluation for an input question-answer pair (regression model for assigning answer confidence for a QA scoring module, [0028]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang such that the evaluation model is a regression model in order to conserve computing resources, as suggested by Ferrucci ([0028]), while predictably improving accuracy, as suggested by Ferrucci ([0030]). The references cited are analogous art in the same field of natural language processing.

Consider claim 12, Zhang discloses a method for evaluating the performance of a question-answering model (evaluating results of a classification model by generating a score evaluating an input-output pair of a question-answering task, [0070-0072], [0039]), comprising: obtaining a prior question-answer pair comprising a question and an associated answer (online concierge system 140 performs question-answering using the model serving system based on knowledge sources, [0039]; the online system stores these historical inputs and outputs of the classification model, [0143], i.e. question-answer pairs from the question-answering task, [0039]); receive an evaluation score for the prior question-answer pair from a large language model (LLM) (online system provides a batch of evaluation request prompts to a machine-learned language model with each input-output pair, i.e. question-answer pair, [0143], which is an LLM, [0034], and generates an evaluation label, [0143], which may be a generated score, [0072]); training an evaluation model based on features that comprise information from the prior question-answer pair and labels based on the evaluation score for the prior question-answer pair (online system generates training samples of labeled historical input-output pairs, i.e. scored question-answer pairs, for the classification model and trains the classification model using a supervised training technique, [0143]; classification model trained by comparing the classification score to the label score with a loss function, [0067], [0072]); obtaining a current question-answer pair (online concierge system receives an input query provided by a user, [0145], and provides an answer in the form of images and descriptions as an input-output pair, [0146]; although Zhang refers to the input-output pair as a query-item pair in this context, for a question-answering task it would be a question-answer pair, [0039]); and applying the current question-answer pair to the evaluation model, resulting in a current evaluation score for the current question-answer pair (query-item pairs, which in the context of question-answering are question-answer pairs, see above, are supplied to the machine-learned language model to generate evaluation results, such as an evaluation result category, [0148-0149]; the machine-learned language model is the classification model and generates evaluation scores instead of or in addition to labels, [0071-0072]).

Zhang does not specifically mention the evaluation model comprising a regression model configured to output an evaluation for an input question-answer pair. Ferrucci discloses an evaluation model comprising a regression model configured to output an evaluation for an input question-answer pair (regression model for assigning answer confidence for a QA scoring module, [0028]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang such that the evaluation model is a regression model for reasons similar to those for claim 1.
Consider claim 17, Zhang discloses a computer-readable storage medium having computer program code recorded thereon that when executed by at least one processor causes the at least one processor to perform a method (non-transitory computer-readable medium storing code executed by a processor, [0171], [0172]) comprising: obtaining a prior question-answer pair comprising a question and an associated answer (online concierge system 140 performs question-answering using the model serving system based on knowledge sources, [0039]; the online system stores these historical inputs and outputs of the classification model, [0143], i.e. question-answer pairs from the question-answering task, [0039]); receive an evaluation score for the prior question-answer pair from a large language model (LLM) (online system provides a batch of evaluation request prompts to a machine-learned language model with each input-output pair, i.e. question-answer pair, [0143], which is an LLM, [0034], and generates an evaluation label, [0143], which may be a generated score, [0072]); training an evaluation model based on features that comprise information from the prior question-answer pair and labels based on the evaluation score for the prior question-answer pair (online system generates training samples of labeled historical input-output pairs, i.e. scored question-answer pairs, for the classification model and trains the classification model using a supervised training technique, [0143]; classification model trained by comparing the classification score to the label score with a loss function, [0067], [0072]); obtaining a current question-answer pair (online concierge system receives an input query provided by a user, [0145], and provides an answer in the form of images and descriptions as an input-output pair, [0146]; although Zhang refers to the input-output pair as a query-item pair in this context, for a question-answering task it would be a question-answer pair, [0039]); and applying the current question-answer pair to the evaluation model, resulting in a current evaluation score for the current question-answer pair (query-item pairs, which in the context of question-answering are question-answer pairs, see above, are supplied to the machine-learned language model to generate evaluation results, such as an evaluation result category, [0148-0149]; the machine-learned language model is the classification model and generates evaluation scores instead of or in addition to labels, [0071-0072]).

Zhang does not specifically mention the evaluation model comprising a regression model configured to output an evaluation for an input question-answer pair. Ferrucci discloses an evaluation model comprising a regression model configured to output an evaluation for an input question-answer pair (regression model for assigning answer confidence for a QA scoring module, [0028]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang such that the evaluation model is a regression model for reasons similar to those for claim 1.
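For readers outside the art unit, the pipeline the independent claims recite (and that the rejection maps onto Zhang and Ferrucci) can be sketched: features come from prior question-answer pairs, labels come from an LLM judge's evaluation scores, and a regression model is fit so current pairs can be scored without another LLM call. Everything below is an illustrative assumption, not disclosure from either reference: the feature function is a toy word-overlap, the LLM judge is a stub, and the regression is a one-feature least-squares fit so the sketch stays dependency-free.

```python
# Illustrative sketch of the claimed evaluation pipeline (hypothetical code,
# not from Zhang, Ferrucci, or the application).

def features(question: str, answer: str) -> float:
    """Toy feature: fraction of question words that appear in the answer."""
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / len(q) if q else 0.0

def llm_evaluate(question: str, answer: str) -> float:
    """Stub standing in for an LLM that scores a prior question-answer pair."""
    return round(features(question, answer), 2)  # placeholder heuristic

def fit_regression(xs, ys):
    """Ordinary least squares for y = w*x + b (the claimed regression model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w = cov / var if var else 0.0
    return w, my - w * mx

# Training: prior question-answer pairs, labeled by the (stubbed) LLM judge.
prior_pairs = [
    ("what color is the sky", "the sky is blue"),
    ("who wrote hamlet", "shakespeare wrote hamlet"),
    ("what is the capital of france", "i do not know"),
]
xs = [features(q, a) for q, a in prior_pairs]
ys = [llm_evaluate(q, a) for q, a in prior_pairs]
w, b = fit_regression(xs, ys)

# Inference: score a current question-answer pair without calling the LLM.
current = ("what color is grass", "grass is green")
score = w * features(*current) + b
print(f"evaluation score: {score:.2f}")
```

The design point the claims turn on is the last two lines: once the regression model is trained on LLM-labeled pairs, current pairs are scored by the cheap model alone, which is also the resource-conservation rationale the rejection draws from Ferrucci.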
Consider claim 3, Zhang discloses the current question-answer pair comprises a current question and a current answer, the current question provided to a trained model and the current answer returned by the trained model (online concierge system receives an input query provided by a user, [0145], and provides an answer in the form of images and descriptions as an input-output pair, [0146]; although Zhang refers to the input-output pair as a query-item pair in this context, for a question-answering task it would be a question-answer pair, [0039]; query-item pairs, which in the context of question-answering are question-answer pairs, see above, are supplied to the machine-learned language model to generate evaluation results, such as an evaluation result category, [0148-0149]; the machine-learned language model is the classification model and generates evaluation scores instead of or in addition to labels, [0071-0072]).

Consider claim 4, Zhang discloses the current evaluation score is indicative of a quality of the current answer to the current question (instruction prompt instructs the machine-learned language model to evaluate the quality of the result relative to the query, [0075]; for a question-answering task, this would be the quality of the answer to the question in the question-answer pair, [0039]).

Consider claim 5, Zhang discloses the question of the prior question-answer pair was provided to a trained model, and the associated answer was returned by the trained model (classification model is a machine-learned model that provides an answer to a question while performing a question-answering task, [0071], [0039]).

Consider claim 6, Zhang discloses the trained model is a different model than the LLM (classification model is another machine-learned model that is trained, [0071]).
Consider claim 7, Zhang discloses the program code is structured to cause the processor to receive the evaluation score for the prior question-answer pair by: generating a prompt that includes the prior question and prior answer of the prior question-answer pair to the LLM (instruction prompt instructs the machine-learned language model to evaluate the quality of the result relative to the query, [0075]; for a question-answering task, this would be the quality of the answer to the question in the question-answer pair, [0039]).

Consider claim 8, Zhang discloses the program code is further structured to cause the processor to perform an action in response to applying the current question-answer pair to the evaluation model, the action comprising at least one of: providing an indication relating to a quality of the current answer to a trained model that generated the current answer (the evaluation label is stored with the historical input-output pair and provided to the classification model for supervised training, [0143]); or providing an indication relating to the quality of the current answer to a planner of a question-answering system that selected the trained model to generate the current answer (noting the claim language "at least one of" only requires this limitation in the alternative; since Zhang discloses the other alternative, the claim as a whole is considered met).

Consider claim 9, Zhang discloses the program code is further structured to cause the processor to: provide, to a user interface, a rating based on the current evaluation score and the current answer (the evaluation label and reasoning are output, [0096], in this instance a rating of exact match, [0102], displayed to the user, [0075]).
Consider claim 15, Zhang discloses performing an action in response to applying the current question-answer pair to the evaluation model, the action comprising at least one of: providing an indication relating to a quality of the current answer to a trained model that generated the current answer (the evaluation label is stored with the historical input-output pair and provided to the classification model for supervised training, [0143]); or providing an indication relating to the quality of the current answer to a planner of a question-answering system that selected the trained model to generate the current answer (noting the claim language "at least one of" only requires this limitation in the alternative; since Zhang discloses the other alternative, the claim as a whole is considered met).

Consider claim 20, Zhang discloses performing an action in response to applying the current question-answer pair to the evaluation model, the action comprising at least one of: providing an indication relating to a quality of the current answer to a trained model that generated the current answer (the evaluation label is stored with the historical input-output pair and provided to the classification model for supervised training, [0143]); or providing an indication relating to the quality of the current answer to a planner of a question-answering system that selected the trained model to generate the current answer (noting the claim language "at least one of" only requires this limitation in the alternative; since Zhang discloses the other alternative, the claim as a whole is considered met).

Claims 2, 13, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20250200356) in view of Ferrucci et al. (US 20130035931), in further view of Jiang et al. ("LLM-BLENDER: Ensembling Large Language Models with Pairwise Ranking and Generative Fusion", arXiv:2306.02561v3 [cs.CL], 30 Jun 2023).
Consider claim 2, Zhang discloses the program code is further structured to cause the processor to: provide each prior question-answer pair of the set to a plurality of LLMs, each LLM returning a respective evaluation score for the prior question-answer pair ("the language models are LLMs", [0034], [0064]; online system provides a batch of evaluation request prompts to machine-learned language models with each input-output pair, i.e. question-answer pair, [0143], which are multiple LLMs, [0034], and generates an evaluation label, [0143], which may be a generated score, [0072]); and train the evaluation model based on a score for each prior question-answer pair (online system generates training samples of labeled historical input-output pairs, i.e. scored question-answer pairs, for the classification model and trains the classification model using a supervised training technique, [0143]; classification model trained by comparing the classification score to the label score with a loss function, [0067], [0072]).

Zhang and Ferrucci do not specifically mention a combination of the evaluation scores. Jiang discloses a combination of the evaluation scores (aggregation of scores, Section 3.3, pages 5-6).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci by utilizing a combination of the evaluation scores in order to address the known strengths and weaknesses of different LLMs, as suggested by Jiang (Section 1, page 1). Doing so would have led to predictable results of alleviating biases, errors, and uncertainties in individual LLMs, as suggested by Jiang (Section 1, page 2). The references cited are analogous art in the same field of natural language processing.
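The multi-judge limitation of claims 2, 13, and 18 (several LLMs each score the same question-answer pair, and training uses a combination of those scores) can be sketched as follows. The stub judges and the choice of a simple mean as the combination are illustrative assumptions; Jiang's LLM-BLENDER actually uses pairwise ranking and generative fusion, of which averaging is only the simplest flavor of aggregation.

```python
# Illustrative sketch of the combined-scores limitation (hypothetical code,
# not from Zhang, Ferrucci, or Jiang).

def combine_scores(scores):
    """Combine per-LLM evaluation scores into one training label (mean)."""
    return sum(scores) / len(scores)

# Stub judges standing in for distinct LLMs with different biases.
judges = [
    lambda q, a: 0.90,  # lenient judge
    lambda q, a: 0.70,  # strict judge
    lambda q, a: 0.80,  # middling judge
]

pair = ("who wrote hamlet", "shakespeare wrote hamlet")
label = combine_scores([judge(*pair) for judge in judges])
print(f"combined label: {label:.2f}")
```

The combined label, rather than any single judge's score, is what would supervise the evaluation model, which is the bias-averaging rationale the rejection draws from Jiang.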
Consider claim 13, Zhang discloses providing each prior question-answer pair of the set to a plurality of LLMs, each LLM returning a respective evaluation score for the prior question-answer pair ("the language models are LLMs", [0034], [0064]; online system provides a batch of evaluation request prompts to machine-learned language models with each input-output pair, i.e. question-answer pair, [0143], which are multiple LLMs, [0034], and generates an evaluation label, [0143], which may be a generated score, [0072]); and training the evaluation model based on a score for each prior question-answer pair (online system generates training samples of labeled historical input-output pairs, i.e. scored question-answer pairs, for the classification model and trains the classification model using a supervised training technique, [0143]; classification model trained by comparing the classification score to the label score with a loss function, [0067], [0072]).

Zhang and Ferrucci do not specifically mention a combination of the evaluation scores. Jiang discloses a combination of the evaluation scores (aggregation of scores, Section 3.3, pages 5-6). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci by utilizing a combination of the evaluation scores for reasons similar to those for claim 2.

Consider claim 18, Zhang discloses providing each prior question-answer pair of the set to a plurality of LLMs, each LLM returning a respective evaluation score for the prior question-answer pair ("the language models are LLMs", [0034], [0064]; online system provides a batch of evaluation request prompts to machine-learned language models with each input-output pair, i.e. question-answer pair, [0143], which are multiple LLMs, [0034], and generates an evaluation label, [0143], which may be a generated score, [0072]); and training the evaluation model based on a score for each prior question-answer pair (online system generates training samples of labeled historical input-output pairs, i.e. scored question-answer pairs, for the classification model and trains the classification model using a supervised training technique, [0143]; classification model trained by comparing the classification score to the label score with a loss function, [0067], [0072]).

Zhang and Ferrucci do not specifically mention a combination of the evaluation scores. Jiang discloses a combination of the evaluation scores (aggregation of scores, Section 3.3, pages 5-6). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci by utilizing a combination of the evaluation scores for reasons similar to those for claim 2.

Claims 10, 16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20250200356) in view of Ferrucci et al. (US 20130035931), in further view of Chopra et al. (US 10445745).

Consider claim 10, Zhang and Ferrucci do not, but Chopra discloses the program code is further structured to cause the processor (program code, Col 15 lines 54-67) to: obtain a chat history that identifies a conversation between a user and a question-answering system (questions from chat are historical questions asked on the platform, Col 3 lines 3-9); and select the prior question-answer pair from the conversation based on a filtering criteria (highest scoring question-answer pairs, Col 13-14 lines 51-5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci by obtaining a chat history that identifies a conversation between a user and a question-answering system and selecting the prior question-answer pair from the conversation based on a filtering criteria in order to make question answering more precise and less cumbersome to use, as suggested by Chopra (Col 1 lines 39-40), predictably addressing customer queries more efficiently, as suggested by Chopra (Col 1 lines 11-12). The references cited are analogous art in the same field of natural language processing.

Consider claim 16, Zhang and Ferrucci do not, but Chopra discloses obtaining a chat history that identifies a conversation between a user and a question-answering system (questions from chat are historical questions asked on the platform, Col 3 lines 3-9); and selecting the prior question-answer pair from the conversation based on a filtering criteria (highest scoring question-answer pairs, Col 13-14 lines 51-5). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci by obtaining a chat history that identifies a conversation between a user and a question-answering system and selecting the prior question-answer pair from the conversation based on a filtering criteria for reasons similar to those for claim 10.

Consider claim 21, Zhang and Ferrucci do not, but Chopra discloses obtaining a chat history that identifies a conversation between a user and a question-answering system (questions from chat are historical questions asked on the platform, Col 3 lines 3-9); and selecting the prior question-answer pair from the conversation based on a filtering criteria (highest scoring question-answer pairs, Col 13-14 lines 51-5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci by obtaining a chat history that identifies a conversation between a user and a question-answering system and selecting the prior question-answer pair from the conversation based on a filtering criteria for reasons similar to those for claim 10.

Claims 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 20250200356) in view of Ferrucci et al. (US 20130035931), in further view of Barbetta et al. (US 20150161106).

Consider claim 14, Zhang and Ferrucci do not, but Barbetta discloses the current evaluation score is indicative of a quality of a current question of the current question-answer pair to a current answer of the current question-answer pair (coercion analyzer generates a score that reflects ease of coercion of the answer to the type of question that the question seeks, [0049], indicative of question quality in the question-answer pair, [0035]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci such that the current evaluation score is indicative of a quality of a current question of the current question-answer pair to a current answer of the current question-answer pair in order to reduce errors in the question-answer sets, as suggested by Barbetta ([0003]), predictably improving accuracy, as suggested by Barbetta ([0001]). The references cited are analogous art in the same field of natural language processing.
Consider claim 19, Zhang and Ferrucci do not, but Barbetta discloses the current evaluation score is indicative of a quality of a current question of the current question-answer pair to a current answer of the current question-answer pair (coercion analyzer generates a score that reflects ease of coercion of the answer to the type of question that the question seeks, [0049], indicative of question quality in the question-answer pair, [0035]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zhang and Ferrucci such that the current evaluation score is indicative of a quality of a current question of the current question-answer pair to a current answer of the current question-answer pair for reasons similar to those for claim 14.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

- Rajagopal et al. (US 20200227026) discloses obtaining a chat history that identifies a conversation between a user and a question-answering system (historical data of previous conversations between users and the QA chatbot, [0027], [0050]); and selecting the prior question-answer pair from the conversation based on a filtering criteria (selecting historical QNA pairs filtered by topic, [0027], [0050]).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jesse Pullias, whose telephone number is 571-270-5135. The examiner can normally be reached M-F 8:00 AM - 4:30 PM. The examiner's fax number is 571-270-6135.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at 571-272-7516.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jesse S Pullias/ Primary Examiner, Art Unit 2655 03/12/26

Prosecution Timeline

Apr 05, 2024 — Application Filed
Nov 10, 2025 — Non-Final Rejection — §103
Feb 18, 2026 — Response Filed
Mar 12, 2026 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596885 — Automatically Labeling Items Using a Machine-Trained Language Model
2y 5m to grant • Granted Apr 07, 2026

Patent 12573378 — Speech Tendency Classification
2y 5m to grant • Granted Mar 10, 2026

Patent 12572740 — Multi-Language Document Field Extraction
2y 5m to grant • Granted Mar 10, 2026

Patent 12566929 — Combining Data Selection and Reward Functions for Tuning Large Language Models Using Reinforcement Learning
2y 5m to grant • Granted Mar 03, 2026

Patent 12536389 — Translation System
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview (+13.0%): 96%
Median Time to Grant: 2y 8m
PTA Risk: Moderate

Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
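The headline projections follow directly from the career data shown in the Examiner Intelligence section, assuming the grant probability is the career allow rate and the +13.0% interview lift is additive in percentage points:

```python
# Projection arithmetic (assumptions: grant probability = career allow rate,
# interview lift adds percentage points). Inputs are from the page above.
granted, resolved = 873, 1052
grant_probability = round(100 * granted / resolved)  # career allow rate, %
with_interview = grant_probability + 13              # +13.0% interview lift
print(grant_probability, with_interview)
```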
