Prosecution Insights
Last updated: April 19, 2026
Application No. 18/749,305

ANSWER FEEDBACK METHOD AND APPARATUS APPLIED TO LARGE LANGUAGE MODEL

Status: Non-Final OA (§103)
Filed: Jun 20, 2024
Examiner: AGAHI, DARIOUSH
Art Unit: 2656
Tech Center: 2600 — Communications
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (above average): 142 granted / 166 resolved, +23.5% vs TC avg
Interview Lift: +29.0% on resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 27 applications currently pending
Career History: 193 total applications across all art units

Statute-Specific Performance

§101: 25.8% (-14.2% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Deltas are measured against the Tech Center average estimate; based on career data from 166 resolved cases.
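The statute-specific figures above are internally consistent: each printed delta implies the same Tech Center average. A quick sketch that re-derives the dashboard numbers (the helper name and the 40.0% TC-average value are inferred from the deltas shown on this page, not stated by the tool):

```python
# Re-derive the examiner statistics shown above. The inputs (142 granted /
# 166 resolved, the per-statute overcome rates) come from this page; the
# 40.0% Tech Center average is inferred from the printed deltas, and the
# function name is illustrative, not any real analytics API.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

print(f"Career allow rate: {allow_rate(142, 166):.0f}%")  # rounds to 86%

# Per-statute rejection-overcome rates vs. the implied TC average of 40.0%.
TC_AVG = 40.0
statute_rates = {"101": 25.8, "103": 47.8, "102": 10.0, "112": 12.6}
for statute, rate in statute_rates.items():
    print(f"§{statute}: {rate}% ({rate - TC_AVG:+.1f}% vs TC avg)")
```

Running this reproduces the 86% career rate and all four printed deltas exactly, which suggests the dashboard computes each delta against a single 40.0% TC baseline.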

Office Action

§103
DETAILED ACTION

This Office action is in response to Applicant's submission filed on 6/20/2024. Claims 1-14, 16, and 18-22 are pending in the application, of which Claims 1, 8, and 16 are independent and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN202311714546.9, filed on 12/13/2023.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/27/2025 has been considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 8-10, 16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Malviya et al. (US 2023/0185835 A1) (herein "Malviya"), in further view of Ding et al. (US 2020/0327198 A1) (herein "Ding").

Regarding claims 1, 8, and 16, Malviya teaches [An answer feedback method, applied to a large language model, the method comprising: - claim 1], [An answer feedback apparatus, applied to a large language model, the apparatus comprising: at least one processor; and a memory in communication with the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions when executed by the at least one processor cause the at least one processor to perform operations comprising: - claim 8], and [A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform operations comprising: - claim 16] (Malviya, Par. 0044: "… The responses to the one or more user queries 110 is provided in FIG. 1B by a response generation unit 235 that employs a domain specific advanced language model for question-answers (QA) stored as one or more tools 224 in the memory 206 of the system 102. … Further, based on the generated one or more responses, the feedback unit 236 allows the user to provide a feedback to the one or more responses. For instance, as shown in exemplary environment 100 of FIG. 1A, the user is provided with an option to validate the response provided by the response generation unit 235 or change the response if he/she is not satisfied with the response provided or if the response generation unit was incapable of providing a response."; Par. 0023: "… The memory 206 may be communicatively coupled to the processor 204 and the units 208."; Par. 0024: "… the processor 204 is configured to fetch and execute computer-readable instructions stored in the memory 206."; Par. 0048: "… The method 400 may be described in the general context of computer executable instructions. Generally, computer executable instructions may include routines, programs, objects, …").

receiving a question input by a user; (Malviya, Par. 0027: "… The user input further comprises one or more user queries 110 pertaining to the plurality of documents as shown in FIG. 1A and FIG. 1C. For instance, the one or more user queries 110 may comprise questions such as "What is the intent of the document?", "What is the device used for?", "What is the nature of participants of the study?" etc.")

generating a candidate answer set of the question by using a pre-trained large language model, and (Malviya, Par. 0020: "… the system also provides responses to the queries submitted by the user for each and every document."; Par. 0044: "… The responses to the one or more user queries 110 is provided in FIG. 1B by a response generation unit 235 that employs a domain specific advanced language model [pre-trained LLM] for question-answers (QA) stored as one or more tools 224 in the memory 206 of the system 102.")

selecting an answer from the candidate answer set as a target answer, and displaying the target answer to the user; (Malviya, Par. 0044: "… the system 102 allows a user to view one or more responses to the one or more user queries 110 provided by him/her by selecting a Response tab as shown in. The responses to the one or more user queries 110 is provided in FIG. 1B by a response generation unit 235 …")

in response to receiving a feedback request for the target answer sent by the user: (Malviya, Par. 0044: "… based on the generated one or more responses, the feedback unit 236 allows the user to provide a feedback to the one or more responses."; Par. 0056: "At block 414, the method 400 may include seeking user feedback corresponding to accuracy of the one or more responses 222 with respect to the corresponding one or more user-queries 110.")

determining, in response to receiving an update request sent by the user based on the feedback page, (Malviya, Par. 0044: "Further, as shown in FIG. 1C, the system 102 allows a user to view one or more responses to the one or more user queries 110 provided by him/her by selecting a Response tab as shown in. … Further, based on the generated one or more responses, the feedback unit 236 allows the user to provide a feedback to the one or more responses."; Par. 0027: "… For instance, the one or more user queries 110 may comprise questions such as "What is the intent of the document?", "What is the device used for?", "What is the nature of participants of the study?" etc.") Note: In the case where response 3 (in FIG. 1C) is provided, the user provides feedback by selecting "change response," which reads on the update request since the user is asking for a "change of response."

an answer indicated by the update request from the candidate answer set as a new target answer, and (Malviya, FIG. 1C: depicting a new target answer which states "Response 3: Human")

displaying the new target answer to the user. (Malviya, Par. 0044: "… the system 102 allows a user to view one or more responses …")

Malviya does not teach, however Ding teaches:

generating a feedback page and displaying the feedback page to the user, (Ding, Par. 0029: "… the AOM 180 may be configured to display or otherwise express the RAS in a human-readable natural-language form or other visually intelligible form.") Note: the ranked answer set (RAS) displayed to the user reads on the feedback page.

wherein content of the feedback page includes the candidate answer set; and (Ding, Par. 0029: "… the AOM 180 may be configured to display or otherwise express the RAS in a human-readable natural-language form or other visually intelligible form.") Note: the ranked answer set (RAS) reads on the candidate answer set.

Ding is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Malviya further in view of Ding to generate a feedback page and display the feedback page to the user, wherein content of the feedback page includes the candidate answer set. Motivation to do so would be to generate and provide an estimate of a confidence that the best-candidate answer is correct (Ding, Par. 0028).

Regarding claims 2, 9, and 18, Malviya, as modified above, teaches the method, the apparatus, and the medium of claims 1, 8, and 16, respectively. Malviya, as modified above, does not teach, however, Ding further teaches wherein the content of the feedback page further comprises a matching degree of each answer in the set of candidate answers to the question, wherein the matching degree is generated by the large language model. (Ding, Par. 0027: "The QA system 120 also includes an answer-scoring module ("ASM") 160. … a query-term-matching-score module 164 (configured to generate a query term matching candidate-answer score based on how well one or more terms in the input question 108 match to one or more terms in the candidate answers); …"; Par. 0020: "… employ a machine learning model to aggregate the concepts scores from the different languages to ameliorate language-specific lexical bias, such that the resulting semantic concept score is based more on the underlying semantic concept than lexical or syntactical level matching.") Note: matching score reads on matching degree. Note: Malviya teaches the LLM (Par. 0044: "Based on the feedback, the training unit 237, re-trains the advanced language model for question-answer (QA).").

Regarding claims 3, 10, and 19, Malviya, as modified above, teaches the method, the apparatus, and the medium of claims 2, 9, and 18, respectively. Malviya, as modified above, does not teach, however, Ding further teaches wherein the content of the feedback page further comprises reference information corresponding to answers in the candidate answer set, wherein the large language model generates the candidate answer set using the reference information. (Ding, Par. 0018: "As referenced herein, a "candidate-answers set" is a set of one or more candidate answers to the input question, an "evidence set" is a set of one or more "evidence passages," and an "evidence passage" is one or more words, phrases, text passages, and/or other items of information and/or data that provides evidence that supports the veracity of the candidate-answers set."; Par. 0019: "… For example, the IBM Watson® QA system employs more than 50 candidate-answer-scoring components that generate candidate-answer scores ranging from formal probabilities to counts to categorical features, based on evidence from different types of sources including unstructured text, semi-structured text, and triple stores. These candidate-answer scorers consider various factors including, but not limited to, the degree of match between a passage's predicate-argument structure ("PAS") and the question, passage source reliability, geospatial location, temporal relationships, taxonomic classification, the lexical and semantic relations the candidate is known to participate in, the candidate's correlation with question terms, its popularity (or obscurity), and its aliases.")

Claims 4-5, 11-12, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Malviya and Ding, in further view of Chiocchi et al. (US 11,038,764 B2) (herein "Chiocchi").

Regarding claims 4, 11, and 20, Malviya, as modified above, teaches the method, the apparatus, and the medium of claims 3, 10, and 19, respectively. Malviya, as modified above, does not teach, however, Chiocchi teaches wherein a preset feedback identifier is displayed on a display page to which the target answer belongs; and the feedback request is sent by an interactive operation of the user on the feedback identifier. (Chiocchi, Col. 16, line 52 - Col. 17, line 7: "As another specific example, FIG. 4B is a screen capture 400B [display page] of a message communication that may be automatically transmitted to and displayed by a connector node in connector interface 132 when computing system 100 detects activation of … Screen capture 400B [display page] includes a template message that includes text [target answer] 402, 404 and parameter placeholders, such as placeholders 405, 406, 407, 409, 411, 413, 415, 417. … Message 400B also includes interactive elements 408, 410, 412, 414 [preset feedback identifier]. When message 400B is viewed through connector interface 132 by a connector node, one of these interactive elements can be activated to initiate, by the connector node, one or more of the automatic actions described above. For example, activation of element 408 [feedback identifier] may cause computing system 100 to automatically generate a message [feedback] including the text 404 and transmit the message to a target node.")

[Screen capture of Chiocchi FIG. 4B omitted.]

Chiocchi is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Malviya, as modified above, further in view of Chiocchi such that a preset feedback identifier is displayed on a display page to which the target answer belongs, and the feedback request is sent by an interactive operation of the user on the feedback identifier. Motivation to do so would be to automatically transmit and display the message when the computing system detects activation of a link in the screen (Chiocchi, Col. 16, ll. 53-58).

Regarding claims 5, 12, and 21, Malviya, as modified above, teaches the method, the apparatus, and the medium of claims 3, 10, and 19, respectively. Malviya, as modified above, does not teach, however, Chiocchi teaches:

wherein an update identifier and a cancel identifier are displayed on the feedback page; and (Chiocchi, Col. 16, line 52 - Col. 17, line 7, quoted in full above) Note: Chiocchi shows the medium discussed above but does not specifically show an update identifier and a cancel identifier. It would be obvious to one of ordinary skill in the art at the time of the invention to add more user inputs as needed because it allows users to interact with the question/answer system in compatible ways and is a matter of design choice.

the update request is sent by an interactive operation of the user on the update identifier; and (Chiocchi, Col. 16, line 52 - Col. 17, line 7, quoted in full above) Note: Chiocchi does not specifically show an update request or a cancel identifier; adding such user inputs would likewise be an obvious design choice, as above.

the method further comprises: returning, in response to receiving a cancel request sent by the user on the feedback page, a display page where the target answer is located, (Chiocchi, Col. 16, line 52 - Col. 17, line 7, quoted in full above) Note: Chiocchi does not specifically show a cancel request; adding such a user input would likewise be an obvious design choice, as above.

wherein the cancel request is sent by an interactive operation of the user on the cancel identifier. (Chiocchi, Col. 16, line 52 - Col. 17, line 7, quoted in full above) Note: Chiocchi does not specifically show a cancel request or a cancel identifier; adding such user inputs would likewise be an obvious design choice, as above.

Chiocchi is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Malviya, as modified above, further in view of Chiocchi such that an update identifier and a cancel identifier are displayed on the feedback page; the update request is sent by an interactive operation of the user on the update identifier; and the method further comprises returning, in response to receiving a cancel request sent by the user on the feedback page, a display page where the target answer is located, wherein the cancel request is sent by an interactive operation of the user on the cancel identifier. Motivation to do so would be to automatically transmit and display the message when the computing system detects activation of a link in the screen (Chiocchi, Col. 16, ll. 53-58).

Claims 6, 13, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Malviya and Ding, in further view of Steenhoek et al. (US 2025/0111239 A1) (herein "Steenhoek").

Regarding claims 6, 13, and 22, Malviya, as modified above, teaches the method, the apparatus, and the medium of claims 1, 8, and 16, respectively. Malviya, as modified above, does not teach, however, Steenhoek teaches wherein the large language model is trained by:

obtaining a pre-trained basic model; (Steenhoek, Par. 0144: "… wherein the large language model was pre-trained to generate source code; …")

performing supervised fine-tuning training on the basic model to obtain a supervised fine-tuning model; (Steenhoek, Par. 0025: "… In supervised learning, a model learns from a training dataset of labeled examples. Each sample in the training dataset contains a correct action that the model should take. The model learns to generalize its actions in order to act in situations not present in the training dataset."; Par. 0037: "… As shown in FIG. 1, system 100 includes various training phases: a supervised fine-tuning phase 102, a reward model training phase 104, … In the supervised fine-tuning phase 102, a deep learning model pre-trained to generate source code 108 is fine-tuned, using a fine-tuning engine 112 and a fine-tuning dataset 110, to learn to generate unit test cases. The deep learning model is trained in the supervised fine-tuning phase 102 using a cross-entropy loss objective function.")

obtaining a pre-trained reward model; and (Steenhoek, Par. 0144: "… comprising: obtaining a reward model trained to generate a reward score for a model-generated unit test case for a focal method, wherein the quality score is based on a unit test case having a plurality of static code quality properties, …"; Par. 0038: "In the reward model training phase 104, a reward model 122 is trained to learn to predict a reward score that indicates the quality of a model-generated test case with respect to the static code quality properties. The reward model 122 is trained on test cases generated by the fine-tuned model, FT-Model 114 given a training sample 116.")

obtaining the large language model through reinforcement learning training based on the supervised fine-tuning model and the reward model. (Steenhoek, Par. 0144: "A computer-implemented method is disclosed, comprising: obtaining a reward model trained to generate a reward score for a model-generated unit test case for a focal method, …; fine-tuning a large language model to learn to generate a unit test case for a given focal method and context of the focal method, wherein the large language model was pre-trained to generate source code; tuning the large language model to learn to generate a unit test case having the plurality of static code quality properties, …, wherein the policy loss is based on a reward having a reward score generated by the reward model for a unit test case generated by the tuned large language model for a given tuning sample; and deploying the tuned large language model in a software development environment to predict a target unit test case for a target focal method."; Par. 0025: "Reinforcement learning is a technique that uses a system of rewards and penalties to train a deep learning model to learn to solve a problem by itself."; Par. 0037: "… In the supervised fine-tuning phase 102, a deep learning model pre-trained to generate source code 108 is fine-tuned, using a fine-tuning engine 112 and a fine-tuning dataset 110, to learn to generate unit test cases. The deep learning model is trained in the supervised fine-tuning phase 102 using a cross-entropy loss objective function.")

Steenhoek is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Malviya, as modified above, further in view of Steenhoek to obtain a pre-trained basic model; perform supervised fine-tuning training on the basic model to obtain a supervised fine-tuning model; obtain a pre-trained reward model; and obtain the large language model through reinforcement learning training based on the supervised fine-tuning model and the reward model. Motivation to do so would be to improve the model's estimation of the reward value of its own predictions (Steenhoek, Par. 0063).

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Malviya, Ding, and Steenhoek, in further view of Gunaselara et al. (US 2021/0365500 A1) (herein "Gunaselara").

Regarding claims 7 and 14, Malviya, as modified above, teaches the method and the apparatus of claims 6 and 13, respectively. Malviya, as modified above, does not teach, however, Gunaselara teaches associatively storing the question and the new target answer as supplemental training data; and performing update training on the large language model by using the supplemental training data. (Gunaselara, Par. 0048: "As an exemplary implementation leveraging a bidirectional encoder representation from transformers (BERT) language model [LLM] or alternative language model, the method may be implemented, as shown in FIG. 3, by training a query-content model using a bidirectional encoder representation from transformers (BERT) language model on a set of question-answer pairs stored in a data system (S110); …")

Gunaselara is considered to be analogous to the claimed invention because it is in the same field of endeavor. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Malviya, as modified above, further in view of Gunaselara to associatively store the question and the new target answer as supplemental training data and perform update training on the large language model by using the supplemental training data. Motivation to do so would be to provide significantly more relevant results (Gunaselara, Par. 0051).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Nouri et al. (US 2024/0038226 A1) teaches, at Par. 0024: "The language model 130 may be a large language model, including question-answer pair generator 132 and task specific output generator 134."

Examiner's Note: The Examiner has cited particular columns and line numbers and/or paragraph numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that the applicant, in preparing responses, fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARIOUSH AGAHI, whose telephone number is (408) 918-7689. The examiner can normally be reached Monday-Thursday and alternate Fridays, 7:30-4:30 PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh Mehta, can be reached at 571-272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

DARIOUSH AGAHI, P.E.
Primary Examiner
/DARIOUSH AGAHI/ Primary Examiner, Art Unit 2656
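Claims 6, 13, and 22 recite a three-phase training recipe (supervised fine-tuning, a pre-trained reward model, then reinforcement learning), which the examiner maps to Steenhoek's training phases. The shape of that pipeline can be sketched as a deliberately tiny, self-contained toy; the three "models" below are trivial stand-ins, and nothing here is the applicant's or Steenhoek's actual implementation.

```python
# Minimal shape of the claimed training pipeline: SFT -> reward model -> RL.
# All three "models" are toy stand-ins for illustration only.
import random

def supervised_fine_tune(base_model: dict, dataset: list) -> dict:
    # Phase 1: fine-tune the pre-trained basic model on labeled
    # (question, answer) pairs to obtain the SFT model.
    sft_model = dict(base_model)
    sft_model.update(dataset)
    return sft_model

def reward_model(question: str, answer: str) -> float:
    # Phase 2 stand-in: a pre-trained scorer for model outputs
    # (here, trivially, longer answers score higher).
    return float(len(answer))

def rl_train(sft_model: dict, reward, prompts: list, steps: int = 10) -> dict:
    # Phase 3: improve the SFT policy by keeping whichever sampled
    # answer the reward model scores highest.
    policy = dict(sft_model)
    for _ in range(steps):
        q = random.choice(prompts)
        current = policy.get(q, "")
        candidates = [current, current + "!"]   # toy "sampling"
        policy[q] = max(candidates, key=lambda a: reward(q, a))
    return policy

sft = supervised_fine_tune({}, [("q", "a")])
llm = rl_train(sft, reward_model, ["q"], steps=5)
```

With this toy reward, each RL step keeps the higher-scoring candidate, so the policy's answer for `"q"` monotonically improves under the reward model, which is the behavior the claimed reinforcement-learning phase relies on.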

Prosecution Timeline

Jun 20, 2024: Application Filed
Feb 08, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596890: SYSTEMS AND METHODS FOR CROSS-LINGUAL TRANSFER LEARNING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596876: SYSTEMS AND METHODS FOR IMPROVING TEXTUAL DESCRIPTIONS USING LARGE LANGUAGE MODELS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591743: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM FOR EXTRACTING A NAMED ENTITY FROM A DOCUMENT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586586: SPEECH RECOGNITION WITH SELECTIVE USE OF DYNAMIC LANGUAGE MODELS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579448: TECHNIQUES FOR POSITIVE ENTITY AWARE AUGMENTATION USING TWO-STAGE AUGMENTATION (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86% (99% with interview, a +29.0% lift)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
