Prosecution Insights
Last updated: April 18, 2026
Application No. 18/142,643

AUTOMATIC TEXTUAL DOCUMENT EVALUATION USING LARGE LANGUAGE MODELS

Non-Final OA: §101, §103
Filed: May 03, 2023
Examiner: KASSIM, IMAD MUTEE
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation of America
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72%, above average (116 granted / 160 resolved; +17.5% vs TC avg)
Interview Lift: +33.8%, strong (resolved cases with interview)
Avg Prosecution: 3y 8m typical (23 currently pending)
Career History: 183 total applications, across all art units
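The headline figures above can be reproduced from the published counts. A minimal sketch follows; note that only the aggregate +33.8% interview lift is published here, so the with/without-interview subgroup counts below are hypothetical placeholders chosen merely to be consistent with the totals:

```python
# Career allow rate from the counts published above.
granted, resolved = 116, 160
allow_rate = granted / resolved  # 0.725, displayed as 72%

# Interview lift is the allow-rate difference between resolved cases with an
# examiner interview and those without. These subgroup counts are hypothetical
# (not published on this page); they sum to the published totals.
with_iv = (40, 41)       # (granted, resolved) with interview -- hypothetical
without_iv = (76, 119)   # (granted, resolved) without interview -- hypothetical
lift = with_iv[0] / with_iv[1] - without_iv[0] / without_iv[1]  # ~= +0.337
```

The lift is sensitive to how the subgroup split is chosen, which is why dashboards like this usually report it alongside the subgroup sample size.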

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 160 resolved cases

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims' subject matter eligibility follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) ("2019 PEG").

With respect to claim 1. Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 1 recites a method, which is a process.

Step 2A, prong one: Does the claim recite an abstract idea, law of nature or natural phenomenon?
Yes—each of the limitations identified below, under its broadest reasonable interpretation, covers the mental processes abstract idea grouping (concepts performed in the human mind (including an observation, evaluation, judgment, opinion)), see MPEP 2106.04(a)(2), subsection III and the 2019 PEG, but for the recitation of generic computer components: “processing the textual content to generate a plurality of close ended queries; generating a plurality of inference values pertaining to the users, each by feeding one of the plurality of close ended queries to at least one conversational language model; processing the plurality of inference to generate at least one evaluation of the textual content; and using the at least one evaluation to generate at least one second prompt” (mental processes: the concept of observation and evaluation of labeling datasets).

Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application. “receiving a textual content generated by group of users in response to an assignment;” involves the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). The generic computer components in these steps are recited at a high level of generality (i.e., as a generic computer component performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—there are no additional limitations beyond the mental processes identified above.
The limitations treated above are directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). The claim also merely recites the words “apply it” (or an equivalent) with the judicial exception, merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The additional element is insignificant extra-solution activity, similar to examples of activities that the courts have found to be insignificant extra-solution activity, in accordance with MPEP 2106.05(g), Insignificant Extra-Solution Activity. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.

Claim 2. Step 1: A method, as above. Step 2A Prong 1: The claim recites “using the at least one second prompt for querying the group of users for an additional textual content”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 3. Step 1: A method, as above.
Step 2A Prong 1: The claim recites “wherein the at least one evaluation is based on relevance of the additional textual content to the at least one second prompt”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets and filtering the labeling between positive and negative sampling based on relevancy. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 4. Step 1: A method, as above. Step 2A Prong 1: The claim recites “wherein the at least one second prompt comprising a suggestion derived from the at least one evaluation”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: The recited “obtaining a set of the potential queries;” involves the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 5. Step 1: A method, as above. Step 2A Prong 1: The claim recites “wherein the at least one evaluation is based on checking when a specified suggestion is present in the textual content”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 6. Step 1: A method, as above.
Step 2A Prong 1: The claim recites “wherein the at least one evaluation is counting how many users in the group of users included the specified suggestion in the textual content pertaining thereto”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 7. Step 1: A method, as above. Step 2A Prong 1: The claim recites “wherein the suggestion is associated with an item pertaining to the assignment”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 8. Step 1: A method, as above. Step 2A Prong 1: The claim recites “wherein the at least one evaluation is based on evaluating a coherence measure of the textual content”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 9. Step 1: A method, as above. Step 2A Prong 1: The claim recites “wherein the evaluation comprising detection of suggestions not expected in response to the querying”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 10. Step 1: A method, as above. Step 2A Prong 1: The claim recites “using the evaluation to partition a group of students according to a similarity measure”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 11. Step 1: A method, as above. Step 2A Prong 1: The claim recites “generating a plurality of queries each from a combination of at least one part of the textual content pertaining to users from the group and one of the plurality of close ended questions”. This limitation merely specifies mental processes: the concept of observation and evaluation of labeling datasets. Step 2A Prong 2, Step 2B: The recited “wherein processing the textual content comprising: accessing a storage to obtain a plurality of close ended questions.” involves the mere gathering of data, which is insignificant extra-solution activity. See MPEP § 2106.05(g). This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claims 12-22. Step 1: The claims recite a system; therefore, they fall into the statutory category of machines. Step 2A Prong 1: Claims 12-22 recite the same mental processes as claims 1-11, respectively. Step 2A Prong 2: This judicial exception is not integrated into a practical application.
Claims 12-22 recite generic computer components, namely a “system comprising a storage and at least one processing circuitry”. As before, the mere recitation that the method is to be performed on a generic computer amounts to a mere instruction to apply the exception on the computer. See MPEP § 2106.05(f). With that exception, the analysis mirrors that of claims 1-11, respectively. Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The analysis, with the one exception noted above, mirrors that of claims 1-11, respectively.

Claim 23. Step 1: The claim recites a system; therefore, it falls into the statutory category of machines. Step 2A Prong 1: Claim 23 recites the same mental processes as claim 1. Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 23 recites generic computer components, namely “one or more computer program products comprising… one or more processors of a computing system is to cause a computing system”. As before, the mere recitation that the method is to be performed on a generic computer amounts to a mere instruction to apply the exception on the computer. See MPEP § 2106.05(f). With that exception, the analysis mirrors that of claim 1. Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The analysis, with the one exception noted above, mirrors that of claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-23 are rejected under 35 U.S.C. 103 as being unpatentable over Wong et al. (US 20180061264 A1) in view of Attali et al. (US 20230080674 A1).

Regarding claim 1. Wong teaches a method for evaluating understanding in a textual content, comprising: receiving a textual content generated by group of users in response to an assignment (see ¶ 19, “The initial users provide answers to the open ended question as an initial set of textual responses at 32 in FIG. 2. After collecting some predetermined number of textual responses, the system automatically uses these textual responses to populate a set of possible textual responses and stores them in the storage 16 of FIG. 1.”); processing the textual content to generate a plurality of close ended queries (see ¶ 1, “automatically evolves open-ended questions into closed-ended questions ranked by numerical ratings.”, also see ¶ 15, “The survey should evolve into including closed-ended questions replacing the original open-ended question while some percentage of available respondents still have not answered the survey.”, also see ¶ 29, “the system must receive a first set of surveys used to gather potential textual responses, send out the potential textual responses for evaluation, receive the evaluations and then adjust the survey to change the original open-ended question into a closed-ended question…in order to provide some group of the potential respondents the ‘final’ version of the survey with the closed-ended question, this process has to occur in real-time, meaning while access to the survey is live and before the predetermined total number of responses has been received.”); processing the plurality of inference to generate at least one evaluation of the textual content (see ¶ 27, “if the answer evaluations of the ‘leisure time’ textual response of FIG. 5 is 1 “not insightful” with a confidence measure of 95%, that textual response can be ignored, as the system has a high confidence that the textual response is not insightful. FIG. 8 shows an example of an insightfulness rating distribution.”, also see ¶ 28, “The combination of the confidence measure and pruning of textual responses allows the self-learning survey to efficiently gain information per survey participant and to avoid spending time and attention on textual responses that are not promising.”, also see claim 1, “generate a confidence measure for each possible textual response based upon the participant numerical ratings; generating a ranked list using the confidence measure, with the processor”); and using the at least one evaluation to generate at least one second prompt (see ¶ 31, “The ‘top textual responses’ are those that have a high numerical rating, such as 3 or 4 in the previous example. These are then used to generate a new version of the survey with the open-ended question converted into close-ended questions as shown at 36 in FIG. 2. These new versions of the survey are then sent out to the remaining users. The converted questions and numerical ratings are then stored in the storage 16 of FIG. 1.”).

Wong does not specifically teach generating a plurality of inference values pertaining to the users, each by feeding one of the plurality of close ended queries to at least one conversational language model.
Attali teaches generating a plurality of inference values pertaining to the users, each by feeding one of the plurality of close ended queries to at least one conversational language model (see ¶ 10, “using a Transformer-based Language Model (for example, Generative Pretrained Transformer 3 or GPT-3) to automatically generate a set of content-controlled passages, and test items for each passage that can be used to evaluate a test taker's comprehension of the text.”, also see ¶ 30, “Based on the selected Source Passage(s), for each such passage, generate a set of possible correct responses to one or more multiple-choice questions using a Transformer-Based Language Model (or Models) by providing”, also see ¶ 141, “Generate a set of possible answers to the item using the Instructions and Examples defined in step 1 and the Source Passage as the Conditioning for the Transformer-Based Language Model.”, also see ¶ 195, also see ¶ 182, “Correct Candidates can be ranked based on the values of one or more of the discussed metrics. The ranking method or formula used may vary from item to item but will typically consider the probability of the candidate being generated by the Transformer-based Language Model and its similarity to the passage. The top-ranked candidate may then be selected as the correct answer for an item. Alternatively, for items in which multiple correct answers should be selected from a set of options, the top N candidates may be selected.”).

Both Wong and Attali pertain to the problem of textual response analysis and are thus analogous art. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine Wong and Attali to teach the above limitations.
The motivation for doing so would be “using a Transformer-based Language Model (for example, Generative Pretrained Transformer 3 or GPT-3) to automatically generate a set of content-controlled passages, and test items for each passage that can be used to evaluate a test taker's comprehension of the text. These test questions or items may be used to probe a test taker's comprehension of the text and their ability to synthesize and distill the information beyond simple vocabulary or standard questions (what, where, who, or why, as examples) that are generated using templates or conventional automated methods. Further, the test items are generated in a way that enables them to be automatically scored, thereby reducing, or eliminating the need for human resources as part of the test generation and scoring processes.” (see Attali ¶ 10).

Regarding claim 2. Wong and Attali teach the method of claim 1. Wong further teaches using the at least one second prompt for querying the group of users for an additional textual content (see ¶ 31, “These are then used to generate a new version of the survey with the open-ended question converted into close-ended questions as shown at 36 in FIG. 2. These new versions of the survey are then sent out to the remaining users. The converted questions and numerical ratings are then stored in the storage 16 of FIG. 1.”).

Regarding claim 3. Wong and Attali teach the method of claim 2. Attali further teaches wherein the at least one evaluation is based on relevance of the additional textual content to the at least one second prompt (see ¶ 165, “There are multiple methods of computing textual similarity, such as by generating word embeddings (vectors) and comparing two vectors using a defined metric (Euclidean distance or cosine distance, as non-limiting examples).
The threshold value used to determine a situation of too great or too little similarity can be tuned based on internally evaluated metrics derived from generated answers and items, as well as by the performance of test takers on generated items and answers. Other approaches to determining textual similarity include evaluating semantic similarity, contextual evaluation, or another suitable approach.”, also see ¶¶ 39-52, “Filter the generated test items (the possible correct and/or incorrect responses) to remove categories of subject matter that are not appropriate or do not satisfy one or more desired metrics; [0040] For possible correct responses, non-limiting examples of such metrics include: [0041] similarity to the source text as estimated by vector similarities of the text encoded by a separate language model; [0042] similarity to individual sentences within the source text via the above mechanism; [0043] N-gram overlap with the source text; [0044] probability of the generated answer under the transformer-based language model; [0045] probability of being correct as estimated by a separately trained question answering model; [0046] length; or [0047] presence of overly rare words; [0048] For possible incorrect responses, non-limiting examples of such metrics include: [0049] similarity to the source text; [0050] similarity to the chosen correct answer; [0051] similarity to unchosen potential correct answers; [0052] similarity to other incorrect answers”). The motivation utilized in the combination of claim 1, supra, applies equally as well to claim 3.

Regarding claim 4. Wong and Attali teach the method of claim 2. Wong further teaches wherein the at least one second prompt comprising a suggestion derived from the at least one evaluation (see ¶ 20, “FIG. 5, four possible ratings for the textual response are provided, with a fifth choice of 0 for ‘not relevant.’ The user selects one of five options as to the insightfulness of each of the suggested textual responses.
The user's numerical ratings are received as values 0-4, with 0 being “not relevant”, 1 being ‘not insightful,’ and 4 being ‘very insightful.’”, also see ¶ 30, “if the administrator is satisfied with the insights gathered, he or she can eliminate the open-ended question to allow subsequent versions of the survey presented to focus on obtaining numerical ratings of the top textual responses.”).

Regarding claim 5. Wong and Attali teach the method of claim 1. Wong further teaches wherein the at least one evaluation is based on checking when a specified suggestion is present in the textual content (see ¶ 20, “FIG. 5, four possible ratings for the textual response are provided, with a fifth choice of 0 for ‘not relevant.’ The user selects one of five options as to the insightfulness of each of the suggested textual responses. The user's numerical ratings are received as values 0-4, with 0 being “not relevant”, 1 being ‘not insightful,’ and 4 being ‘very insightful.’”, also see ¶ 30, “if the administrator is satisfied with the insights gathered, he or she can eliminate the open-ended question to allow subsequent versions of the survey presented to focus on obtaining numerical ratings of the top textual responses.”).

Regarding claim 6. Wong and Attali teach the method of claim 5. Wong further teaches wherein the at least one evaluation is counting how many users in the group of users included the specified suggestion in the textual content pertaining thereto (see ¶ 16, “A closed-ended question is one in which the user selects from a fixed set of choices. An evaluation or rating, also referred to as a numerical rating, is a measure of the users' opinion as to the value or insight of an item. The item may be a title or description of a book, movie, or a textual response as defined above.”, also see ¶ 20, “FIG. 5, four possible ratings for the textual response are provided, with a fifth choice of 0 for ‘not relevant.’ The user selects one of five options as to the insightfulness of each of the suggested textual responses. The user's numerical ratings are received as values 0-4, with 0 being “not relevant”, 1 being ‘not insightful,’ and 4 being ‘very insightful.’”, also see ¶ 25, “textual response R2 is given a numerical rating score of 1 by two participants and 4 by two other participants, its SE will be greater than 0, specifically around 0.866 or low confidence, giving it a better chance to be selected for future versions of the survey by the sampling algorithm. Intuitively, confidence based sampling allows the self-learning survey to learn as much as it can about the true value of each textual response per participant evaluation. FIG. 6 shows the possible textual responses and their average numerical ratings.”).

Regarding claim 7. Wong and Attali teach the method of claim 5. Wong further teaches wherein the suggestion is associated with an item pertaining to the assignment (see ¶ 31, “The ‘top textual responses’ are those that have a high numerical rating, such as 3 or 4 in the previous example. These are then used to generate a new version of the survey with the open-ended question converted into close-ended questions as shown at 36 in FIG. 2. These new versions of the survey are then sent out to the remaining users. The converted questions and numerical ratings are then stored in the storage 16 of FIG. 1.”).

Regarding claim 8. Wong and Attali teach the method of claim 1. Attali further teaches wherein the at least one evaluation is based on evaluating a coherence measure of the textual content (see ¶ 181, “a trained model that tries to evaluate the quality of a summary of a piece of text, and which could be used in the ranking equation. This could be used to help rank potential correct candidates based on their estimated “quality”.”).
The motivation utilized in the combination of claim 1, supra, applies equally as well to claim 8.

Regarding claim 9. Wong and Attali teach the method of claim 1. Attali further teaches wherein the evaluation comprising detection of suggestions not expected in response to the querying (see ¶ 36, “Using the set of Alternative Passages generated or corresponding to each selected Source Passage, generate a set of possible incorrect responses to each of the multiple-choice questions referred to in the previous step using a Transformer-based Language Model by providing: [0037] The same Instructions and Examples used to generate the possible correct response(s) for the selected Source Passage; [0038] The Alternative Passage for which answers will be generated as the Conditioning; [0039] Filter the generated test items (the possible correct and/or incorrect responses) to remove categories of subject matter that are not appropriate or do not satisfy one or more desired metrics;”). The motivation utilized in the combination of claim 1, supra, applies equally as well to claim 9.

Regarding claim 10. Wong and Attali teach the method of claim 1. Attali further teaches using the evaluation to partition a group of students according to a similarity measure (see ¶ 29, “the self-learning survey can be configured to gather a total of 2000 responses, begin sampling textual responses for evaluation when the first 20 textual responses are received, and alternate between soliciting new open-ended textual responses every third participant. The link provided to the potential respondents stays ‘live’ until some predetermined total number of responses is reached. Within that time, the system must receive a first set of surveys used to gather potential textual responses, send out the potential textual responses for evaluation, receive the evaluations and then adjust the survey to change the original open-ended question into a closed-ended question.”).
The motivation utilized in the combination of claim 1, supra, applies equally as well to claim 10.

Regarding claim 11. Wong and Attali teach the method of claim 1. Wong further teaches wherein processing the textual content comprising: accessing a storage to obtain a plurality of close ended questions; and generating a plurality of queries each from a combination of at least one part of the textual content pertaining to users from the group and one of the plurality of close ended questions (see ¶ 19, “The initial users provide answers to the open ended question as an initial set of textual responses at 32 in FIG. 2. After collecting some predetermined number of textual responses, the system automatically uses these textual responses to populate a set of possible textual responses and stores them in the storage 16 of FIG. 1. In the next survey or set of surveys for subsequent participants who access the survey, a subset of the available textual responses is added to the surveys at 34.”).

Regarding claim 12. Claim 12 recites a system comprising a storage and at least one processing circuitry configured to perform the method recited in claim 1. Therefore, the rejection of claim 1 above applies equally here. Wong also teaches the additional elements of claim 12 not recited in claim 1, comprising a storage and at least one processing circuitry (see ¶ 17, “a computer such as 13 having a processor 26. The computer 13 has a connection, either through the network 18 or directly, to a database or other storage 16.”).

Regarding claims 13-22. Claims 13-22 depend from claim 12 and recite limitations similar to those recited in claims 2-11. Therefore, claims 13-22 are rejected with the same rationale applied against claims 2-11 above.

Regarding claim 23. Claim 23 recites one or more computer program products comprising instructions to perform the method recited in claim 1. Therefore, the rejection of claim 1 above applies equally here.
Wong also teaches the additional elements of claim 23 not recited in claim 1, comprising one or more processors of a computing system (see ¶ 17, “a computer such as 13 having a processor 26. The computer 13 has a connection, either through the network 18 or directly, to a database or other storage 16.”).

Related prior art: Al-Rikabi et al. (US 12223080 B1) teaches that a language model (LM) may be used to score and rank potential synthetic NLQs. A language model is a statistical model that assigns probabilities to words and sentences. Typically, the NLQ query service 110 may be trying to guess the next word w in a sentence given all previous words, often referred to as the “history.” The losses used in training a language model may be used to evaluate whether some text makes sense or not. Reza et al. (US 20230237277 A1) teaches that the relevant features 110 are used for generating dynamic prompt templates 115. In some instances, a prompt template 115 is generated for each of the extracted relevant features 110. In other instances, a prompt template 115 is generated for each relevant feature (a subset of features) selected from the extracted relevant features 110.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM whose telephone number is (571) 272-2958. The examiner can normally be reached 10:30AM-5:30PM, M-F (E.S.T.). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IMAD KASSIM/ Primary Examiner, Art Unit 2129
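As context for the rejections above, the four steps of claim 1 (generate close-ended queries from the users' text, feed each query to a conversational language model, aggregate the resulting inference values into an evaluation, and derive a second prompt from that evaluation) can be sketched in miniature. This is purely an illustrative sketch of the recited technique, not the applicant's implementation; every name, the toy stand-in "model", and the suggestion-bank scoring scheme are hypothetical:

```python
# Hypothetical sketch of the claim 1 pipeline; names and logic are illustrative only.

def evaluate_submission(textual_content, query_model, suggestion_bank):
    # Step 1: process the textual content into close-ended (yes/no) queries.
    queries = [
        f"Does the following text mention '{s}'? Answer yes or no.\n\n{textual_content}"
        for s in suggestion_bank
    ]
    # Step 2: feed each query to a conversational language model,
    # yielding one inference value per query.
    inferences = [query_model(q).strip().lower().startswith("yes") for q in queries]
    # Step 3: aggregate the inference values into an evaluation,
    # here a simple coverage score over the expected suggestions.
    evaluation = sum(inferences) / len(inferences)
    # Step 4: use the evaluation to generate a second prompt, e.g. asking
    # the users to address the suggestions their text did not cover.
    missing = [s for s, hit in zip(suggestion_bank, inferences) if not hit]
    second_prompt = (
        "Please revise your answer to also address: " + ", ".join(missing)
        if missing else "No revision needed."
    )
    return evaluation, second_prompt

# Trivial stand-in "model": answers yes when the quoted term appears verbatim.
def toy_model(prompt):
    question, _, text = prompt.partition("\n\n")
    term = question.split("'")[1]
    return "yes" if term in text else "no"

score, follow_up = evaluate_submission(
    "The survey uses closed-ended questions.",
    toy_model,
    ["closed-ended questions", "numerical ratings"],
)
```

A real system would replace `toy_model` with an actual conversational LM call; the structural point is only that each close-ended query produces one inference value, which the pipeline then aggregates and feeds forward.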

Prosecution Timeline

May 03, 2023: Application Filed
Dec 26, 2025: Non-Final Rejection — §101, §103
Apr 07, 2026: Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596923: MACHINE LEARNING OF KEYWORDS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12572843: AGENT SYSTEM FOR CONTENT RECOMMENDATIONS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12572854: ROOT CAUSE DISCOVERY ENGINE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566980: SYSTEM AND METHOD HAVING THE ARTIFICIAL INTELLIGENCE (AI) ALGORITHM OF K-NEAREST NEIGHBORS (K-NN)
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12566861: IDENTIFYING AND CORRECTING VULNERABILITIES IN MACHINE LEARNING MODELS
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 99% (+33.8%)
Median Time to Grant: 3y 8m
PTA Risk: Low

Based on 160 resolved cases by this examiner. Grant probability derived from career allow rate.
