Prosecution Insights
Last updated: April 19, 2026
Application No. 18/589,679

AI-GENERATED ESSAY FEEDBACK FOR ASSISTING TUTORS

Non-Final OA — §101, §103
Filed
Feb 28, 2024
Examiner
UTAMA, ROBERT J
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Paper Education Company Inc.
OA Round
1 (Non-Final)
60%
Grant Probability
Moderate
1-2
OA Rounds
3y 6m
To Grant
90%
With Interview

Examiner Intelligence

Grants 60% of resolved cases
60%
Career Allow Rate
483 granted / 803 resolved
-9.9% vs TC avg
Strong +30% interview lift
+30.0%
Interview Lift
comparing resolved cases with vs. without an interview
Typical timeline
3y 6m
Avg Prosecution
54 currently pending
Career history
857
Total Applications
across all art units
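The tiles above are arithmetically consistent with one another, which is worth verifying before quoting them. A minimal sketch using only the figures shown on this page (variable names are illustrative):

```python
# Figures taken from the Examiner Intelligence tiles above.
granted = 483              # granted cases
resolved = 803             # resolved cases
pending = 54               # currently pending
total_applications = 857   # career total across all art units

allow_rate = granted / resolved                  # career allow rate
with_interview = min(allow_rate + 0.30, 1.0)     # stated +30-point interview lift

print(f"Career allow rate: {allow_rate:.1%}")    # ~60.1%, shown rounded as 60%
print(f"With interview:    {with_interview:.1%}")  # ~90.1%, shown rounded as 90%

# The career totals reconcile: resolved + pending = total applications.
assert resolved + pending == total_applications
```

The rounded 60% and 90% headline numbers fall straight out of these counts.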

Statute-Specific Performance

§101
22.9%
-17.1% vs TC avg
§103
37.5%
-2.5% vs TC avg
§102
12.0%
-28.0% vs TC avg
§112
19.3%
-20.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 803 resolved cases
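Each statute bar is annotated with a delta against the Tech Center average, so subtracting the delta from the examiner's rate recovers the baseline the black line marks. A quick sketch with the chart's numbers (the dict layout is illustrative):

```python
# (examiner rate %, delta vs TC avg %) per statute, as shown in the chart above.
stats = {
    "§101": (22.9, -17.1),
    "§103": (37.5, -2.5),
    "§102": (12.0, -28.0),
    "§112": (19.3, -20.7),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = round(rate - delta, 1)  # rate = tc_avg + delta
    print(f"{statute}: examiner {rate}% vs implied TC average {implied_tc_avg}%")
```

All four deltas recover the same 40.0% baseline, i.e. every statute is measured against a single Tech Center average estimate.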

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16, 21-22 and 25-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to judicial exception(s) without significantly more.

[STEP 1] The claim recites at least one step or structure. Thus, the claim is to a process or product, which is one of the statutory categories of invention (Step 1: YES).

[STEP 2A, PRONG I] Claims 1, 11 and 21 recite:

Claim 1 recites: A non-transitory computer-readable medium storing code which when executed by one or more processors of one or more computing devices causes the one or more computing devices to assist a human tutor to assess an essay written by a student, the one or more processors being configured to: analyze the essay using a Large Language Model (LLM) to output AI-generated suggested written corrective feedback to the human tutor via a user interface to enable human-in-the-loop (HITL) review of the AI-generated suggested written corrective feedback; receive input from the human tutor via the user interface to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback; and communicate the HITL-AI written corrective feedback to the student.

Claim 11 recites: A computer-implemented method of assisting a human tutor in tutoring a student in writing an essay, the method comprising: analyzing the essay using a Large Language Model (LLM) to output AI-generated suggested written corrective feedback to the human tutor via a user interface to enable human-in-the-loop (HITL) review of the AI-generated suggested written corrective feedback; receiving input from the human tutor via the user interface to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback; and communicating the HITL-AI written corrective feedback to the student.

Claim 21 recites: A computer system for assisting a human tutor to assess an essay written by a student, the system comprising: a tutor computing device for the human tutor to view the essay; one or more tutoring platform servers to receive the essay from the student and to transmit the essay to the tutor computing device; a Large Language Model (LLM) server that hosts a Large Language Model, the LLM server being configured to receive one or more prompts from the one or more tutoring platform servers to cause the LLM to analyze the essay and to output AI-generated suggested written corrective feedback to the one or more tutoring platform servers and tutor computing device for viewing by the human tutor to enable human-in-the-loop (HITL) review of the AI-generated suggested written corrective feedback by the human tutor; wherein the tutor computing device receives input from the human tutor to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback; and wherein the tutor computing device communicates the HITL-AI written corrective feedback to the one or more tutoring platform servers and wherein the one or more tutoring platform servers communicates the HITL-AI written corrective feedback to the student.

The aforementioned limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations between people but for the recitation of generic computer components. That is, other than reciting "computer system", "tutor computing device", "tutoring platform servers" and "Large Language Model server", nothing in the claim elements precludes the steps from practically being performed between people. For example, but for the recited language, the steps in the context of these claims encompass a teacher receiving an essay from a student, grading the essay and providing corrective feedback to the student. If a claim limitation, under its broadest reasonable interpretation, covers managing interactions between people, then it falls within the "Organization of Human Activity" grouping of abstract ideas. Accordingly, the claim recites a judicial exception, and the analysis must therefore proceed to Step 2A Prong Two.

[STEP 2A, PRONG II] This judicial exception is not integrated into a practical application. In particular, the claim only recites the additional elements "computer system", "tutor computing device", "tutoring platform servers" and "Large Language Model server". These elements, in the aforementioned steps, are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim is therefore directed to the judicial exception. (Step 2A: YES).

[STEP 2B] The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the aforementioned steps amounts to no more than mere instructions to apply the exception using a generic computer component, which cannot provide an inventive concept (for example, see paragraph 17 showing generic computing devices and paragraph 19 showing generic over-the-counter LLMs). As noted previously, the claim as a whole merely describes how to generally "apply" the aforementioned concept in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. The claim is not patent eligible. (Step 2B: NO).

Claims 2-16, 22 and 25-26 depend from the claims addressed above and include all the limitations of those claims. Therefore, the dependent claims recite the same abstract idea. For example, claims 2-6, 9 and 13-14 are directed to different types of rubric criteria for evaluating the comment, a type of abstract idea; claims 5-8, 15-16, 22 and 26 are directed to instructions that label comments and create a classifier, a technological environment. The claims recite no additional limitations. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claims are therefore directed to the judicial exception. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology.
Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, 11-12, 15, 21-22 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Sandrew US 11,990,139 in view of Motive EP 4163815.

Claims 1 and 11: The Sandrew reference provides a teaching of a non-transitory computer-readable medium storing code which when executed by one or more processors of one or more computing devices causes the one or more computing devices to assist a human tutor to assess an essay written by a student (see col. 11:20-25, "the AI system may also be used to evaluate and grade students' written essays or other student output"), the one or more processors being configured to: analyze the essay using a Large Language Model (LLM) (col. 11:5-15, analyzing the essay to provide feedback); and communicate the HITL-AI written corrective feedback to the student (see col. 10:55-67).
The Sandrew reference is silent on the teaching of: output AI-generated suggested written corrective feedback to the human tutor via a user interface to enable human-in-the-loop (HITL) review; receive input from the human tutor via the user interface to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback; and communicate the HITL-AI written corrective feedback to the student.

However, the Motive reference provides a teaching of output AI-generated suggested written corrective feedback to the human tutor via a user interface to enable human-in-the-loop (HITL) review (see paragraph 75, allowing the instructor to modify the suggested feedback), and receive input from the human tutor via the user interface to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback (see paragraph 75, "A display (e.g., display 606 of FIG. 6) may provide the student's essay, including the score, comments, or a combination thereof, to the instructor for additional, optional, manual alteration").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of receiving input from the human tutor via the user interface to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback, as taught by the Motive reference, in order to provide more fair and unbiased feedback to the student (see paragraph 10).

Claims 2, 12 and 22: The Sandrew reference provides a teaching of wherein the AI-generated suggested written corrective feedback is evaluated based on a plurality of rubric dimensions of a feedback rubric that represent desired feedback qualities (see col. 11:25-35, the AI system 102 generates assessment system 152 using all available information including… criteria the teacher has specified for evaluating the written assignment).

Claims 5, 15 and 25: The Sandrew reference provides a teaching of comprising code for crafting prompts to obtain the AI-suggested written corrective feedback from the LLM (see col. 11:24-35).

Claim 21: The Sandrew reference provides a teaching of a computer system for assisting a human tutor to assess an essay written by a student, the system comprising: a tutor computing device for the human tutor to view the essay (see col. 9:27-45, teacher computer 110); one or more tutoring platform servers to receive the essay from the student and to transmit the essay to the tutor computing device (see col. 8:30-45, item 103 LLM server); a Large Language Model (LLM) server that hosts a Large Language Model, the LLM server being configured to receive one or more prompts from the one or more tutoring platform servers to cause the LLM to analyze the essay (col. 11:5-15, analyzing the essay to provide feedback) and to output AI-generated suggested written corrective feedback to the one or more tutoring platform servers (see col. 10:35-50); wherein the tutor computing device communicates the HITL-AI written corrective feedback to the one or more tutoring platform servers and wherein the one or more tutoring platform servers communicates the HITL-AI written corrective feedback to the student (see col. 10:55-67).

The Sandrew reference is silent on the teaching of: wherein the tutor computing device receives input from the human tutor to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback; and the tutor computing device for viewing by the human tutor to enable human-in-the-loop (HITL) review of the AI-generated suggested written corrective feedback by the human tutor.
However, the Motive reference provides a teaching of output AI-generated suggested written corrective feedback to the human tutor via a user interface to enable human-in-the-loop (HITL) review (see paragraph 75, allowing the instructor to modify the suggested feedback), wherein the tutor computing device receives input from the human tutor to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback (see paragraph 75, "A display (e.g., display 606 of FIG. 6) may provide the student's essay, including the score, comments, or a combination thereof, to the instructor for additional, optional, manual alteration").

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of wherein the tutor computing device receives input from the human tutor to accept, reject or edit the AI-generated suggested written corrective feedback to thereby constitute HITL-AI written corrective feedback, and the tutor computing device for viewing by the human tutor to enable human-in-the-loop (HITL) review of the AI-generated suggested written corrective feedback by the human tutor, as taught by the Motive reference, in order to provide more fair and unbiased feedback to the student (see paragraph 10).

Claims 3-4 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Sandrew US 11,990,139, in view of Motive EP 4163815, and further in view of Liu et al., ReviewerGPT [1].

Claims 3 and 13: The Sandrew reference is silent on the teaching of: a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment; a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment; and a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment.

The Liu et al. reference provides a teaching of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment (page 5, paragraph 2, rubric for specific feedback comment). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment, as taught by the Liu reference, in order to provide realistic and accurate feedback to the student.

While the combination of Sandrew and Liu provides different rubrics to evaluate the AI-suggested feedback, it is silent on the exact limitation of a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment, and a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment.
The examiner takes the position that, at the time the invention was made, it would have been an obvious matter of design choice to a person of ordinary skill in the art to use different written rubrics to evaluate the written feedback of the LLM output, because Applicant has not disclosed that the exact wording for the first and second rubrics provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Liu's rubric and applicant's invention to perform equally well for evaluating the quality of the LLM written feedback. Therefore, it would have been prima facie obvious to modify to obtain the invention as specified in claims 3, 9 and 13 because such a modification would have been considered a mere design consideration which fails to patentably distinguish over the prior art of Sandrew and Liu.

Claims 4 and 14: The Sandrew reference is silent on the teaching of wherein the plurality of rubric dimensions comprises: a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment; a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment; a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment; a fourth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is suitable for a student level; a fifth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is entirely positive; a sixth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unnecessarily repetitive by restating a same issue previously addressed; a seventh rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unsafe; and an eighth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is inaccurate.

However, the Liu reference provides a teaching of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment (page 5, paragraph 2, rubric for specific feedback comment). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment, as taught by the Liu reference, in order to provide realistic and accurate feedback to the student.

While the combination of Sandrew and Liu provides different rubrics to evaluate the AI-suggested feedback, it is silent on the exact limitation of a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment; a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment; a fourth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is suitable for a student level; a fifth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is entirely positive; a sixth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unnecessarily repetitive by restating a same issue previously addressed; a seventh rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unsafe; and an eighth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is inaccurate.
The examiner takes the position that, at the time the invention was made, it would have been an obvious matter of design choice to a person of ordinary skill in the art to use different written rubrics to evaluate the written feedback of the LLM output, because Applicant has not disclosed that the exact wording for the first and second rubrics provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Liu's rubric and applicant's invention to perform equally well for evaluating the quality of the LLM written feedback. Therefore, it would have been prima facie obvious to modify to obtain the invention as specified in claims 4, 10 and 14 because such a modification would have been considered a mere design consideration which fails to patentably distinguish over the prior art of Sandrew and Liu.

Claims 6, 16 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Sandrew US 11,990,139 in view of Motive EP 4163815, in view of Steedman et al US 11,132,988, and further in view of Kwak KR 20230071673.

Claims 6, 16 and 26: The Sandrew reference is silent on the teaching of comprising code that causes the one or more computing devices to evaluate the AI-generated suggested written corrective feedback, the one or more processors being configured to: create a dataset of comments; and label the comments, by human expert reviewers, according to a plurality of rubric dimensions to create a labeled dataset. However, the Steedman et al reference provides a teaching of create a dataset of comments (see col. 20:25-35) and label the comments, by human expert reviewers, according to a plurality of rubric dimensions to create a labeled dataset (col. 6:60-67).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of create a dataset of comments, and label the comments, by human expert reviewers, according to a plurality of rubric dimensions to create a labeled dataset, as taught by the Steedman reference, in order to ensure efficient training (see col. 1:33-36).

The Sandrew reference is silent on the teaching of create a binary classifier to classify new comments based on each one of the plurality of rubric dimensions. However, the Kwak reference provides a teaching of create a binary classifier to classify new comments based on each one of the plurality of rubric dimensions (see page 8, second paragraph). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of create a binary classifier to classify new comments based on each one of the plurality of rubric dimensions, as taught by the Kwak reference, in order to provide a more realistic response (see page 4, paragraph 9).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sandrew US 11,990,139 in view of Motive EP 4163815, in view of Steedman et al US 11,132,988, further in view of Kwak KR 20230071673, and further in view of Maschmeyer et al US 20240256792.

Claim 7: The Sandrew reference is silent on the teaching of wherein the dataset of comments includes both AI-generated comments and human-written comments. However, the Maschmeyer reference provides a teaching of wherein the dataset of comments includes both AI-generated comments and human-written comments (see paragraph 50, training dataset including papers and LLM output).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of wherein the dataset of comments includes both AI-generated comments and human-written comments, as taught by the Maschmeyer reference, in order to provide continual improvement to the system (see paragraph 202).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sandrew US 11,990,139 in view of Motive EP 4163815, in view of Steedman et al US 11,132,988, further in view of Kwak KR 20230071673, in view of Maschmeyer et al US 20240256792, and further in view of Dietrich [2].

Claim 8: The Sandrew reference is silent on the teaching of code to craft prompts based on classification results indicative of whether the comments adhere or not to the plurality of rubric dimensions. However, the Dietrich reference provides a teaching of craft prompts based on classification results indicative of whether the comments adhere or not to the plurality of rubric dimensions (see paragraph 27, last paragraph). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of code to craft prompts based on classification results indicative of whether the comments adhere or not to the plurality of rubric dimensions, as taught by the Dietrich reference, in order to improve the accuracy of the assessment (see page 27, paragraph 2).

Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sandrew US 11,990,139 in view of Motive EP 4163815, in view of Steedman et al US 11,132,988, further in view of Kwak KR 20230071673, in view of Maschmeyer et al US 20240256792, further in view of Dietrich, and further in view of Liu et al., ReviewerGPT.

Claim 9: The Sandrew reference is silent on the teaching of: a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment; a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment; and a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment.

The Liu et al. reference provides a teaching of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment (page 5, paragraph 2, rubric for specific feedback comment). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment, as taught by the Liu reference, in order to provide realistic and accurate feedback to the student.

While the combination of Sandrew and Liu provides different rubrics to evaluate the AI-suggested feedback, it is silent on the exact limitation of a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment, and a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment.
The examiner takes the position that, at the time the invention was made, it would have been an obvious matter of design choice to a person of ordinary skill in the art to use different written rubrics to evaluate the written feedback of the LLM output, because Applicant has not disclosed that the exact wording for the first and second rubrics provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Liu's rubric and applicant's invention to perform equally well for evaluating the quality of the LLM written feedback. Therefore, it would have been prima facie obvious to modify to obtain the invention as specified in claim 9 because such a modification would have been considered a mere design consideration which fails to patentably distinguish over the prior art of Sandrew and Liu.

Claim 10: The Sandrew reference is silent on the teaching of wherein the plurality of rubric dimensions comprises: a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment; a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment; a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment; a fourth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is suitable for a student level; a fifth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is entirely positive; a sixth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unnecessarily repetitive by restating a same issue previously addressed; a seventh rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unsafe; and an eighth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is inaccurate.

However, the Liu reference provides a teaching of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment (page 5, paragraph 2, rubric for specific feedback comment). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the Sandrew reference with the feature of a third rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is a specific comment, as taught by the Liu reference, in order to provide realistic and accurate feedback to the student.

While the combination of Sandrew and Liu provides different rubrics to evaluate the AI-suggested feedback, it is silent on the exact limitation of a first rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an encouraging comment; a second rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is an inquiry-based comment; a fourth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is suitable for a student level; a fifth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is entirely positive; a sixth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unnecessarily repetitive by restating a same issue previously addressed; a seventh rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is unsafe; and an eighth rubric dimension to evaluate whether the AI-generated suggested written corrective feedback is inaccurate.
The examiner takes the position that, at the time the invention was made, it would have been an obvious matter of design choice to a person of ordinary skill in the art to use different written rubrics to evaluate the written feedback of the LLM output, because Applicant has not disclosed that the exact wording for the first and second rubrics provides an advantage, is used for a particular purpose, or solves a stated problem. One of ordinary skill in the art, furthermore, would have expected Liu's rubric and applicant's invention to perform equally well for evaluating the quality of the LLM written feedback. Therefore, it would have been prima facie obvious to modify to obtain the invention as specified in claim 10 because such a modification would have been considered a mere design consideration which fails to patentably distinguish over the prior art of Sandrew and Liu.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT J UTAMA, whose telephone number is (571) 272-1676. The examiner can normally be reached 9:00 - 17:30, Monday - Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kang Hu, can be reached at (571) 270-1344. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT J UTAMA/
Primary Examiner, Art Unit 3715

[1] Liu, Ryan and Shah, Nihar. "ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing." arXiv:2306.00622v1 [cs.CL], June 2023.
[2] Dietrich, Felix T.J. "Leveraging LLMs for Automated Feedback Generation on Exercises." Master's thesis, Technical University of Munich, September 2023.

Prosecution Timeline

Feb 28, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603016
WEARABLE TERMINAL, PRESENTATION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12562072
ADAPTIVE AUDIO AND AUDIOVISUAL RECURSIVE SELF-FEEDBACK FOR SPEECH THERAPY
2y 5m to grant Granted Feb 24, 2026
Patent 12548457
METHOD AND ARRANGEMENT FOR ASSISTED EXECUTION OF AN ACTIVITY
2y 5m to grant Granted Feb 10, 2026
Patent 12542070
TEACHING AID
2y 5m to grant Granted Feb 03, 2026
Patent 12536788
TRACKING DIET AND NUTRITION USING WEARABLE BIOLOGICAL INTERNET-OF-THINGS
2y 5m to grant Granted Jan 27, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
90%
With Interview (+30.0%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 803 resolved cases by this examiner. Grant probability derived from career allow rate.
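As the note above states, grant probability is derived from the career allow rate, and the interview scenario adds the stated +30-point lift; the projection therefore reduces to a one-line formula. A minimal sketch (function name is illustrative, not from any real tool):

```python
def projected_grant_probability(base_rate: float, interview: bool = False,
                                lift: float = 0.30) -> float:
    """Career allow rate, optionally bumped by the interview lift, capped at 1.0."""
    return min(base_rate + (lift if interview else 0.0), 1.0)

base = 483 / 803  # career allow rate, ~60%
print(f"Grant probability: {projected_grant_probability(base):.0%}")        # 60%
print(f"With interview:    {projected_grant_probability(base, True):.0%}")  # 90%
```

The cap matters only for examiners whose allow rate already exceeds 70%; for this examiner the lifted figure stays below 100%.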
