Prosecution Insights
Last updated: April 19, 2026
Application No. 17/427,595

METHOD AND DEVICE FOR PROVIDING TRAINING CONTENT USING AI TUTOR

Current status: Non-Final OA (§101)
Filed: Jul 30, 2021
Examiner: GEBREMICHAEL, BRUK A
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Socra AI Inc.
OA Round: 5 (Non-Final)
Grant Probability: 22% (At Risk)
Expected OA Rounds: 5-6
Estimated Time to Grant: 4y 5m
Grant Probability With Interview: 47%

Examiner Intelligence

Grants only 22% of cases
Career Allow Rate: 22% (152 granted / 680 resolved), -47.6% vs Tech Center average
Strong interview lift: +25.0% higher allow rate among resolved cases with an interview
Typical timeline: 4y 5m average prosecution; 61 applications currently pending
Career history: 741 total applications across all art units

Statute-Specific Performance

§101: 23.8% (-16.2% vs TC avg)
§103: 36.6% (-3.4% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 27.9% (-12.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 680 resolved cases
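The "vs TC avg" deltas above can be sanity-checked against the examiner's per-statute rates. A minimal sketch, assuming each delta is a simple percentage-point difference (examiner rate minus Tech Center average), recovers the implied baselines:

```python
# Examiner allow rates by statute (percent) and their deltas vs. the
# Tech Center average (percentage points), as shown in the figures above.
examiner = {"101": 23.8, "103": 36.6, "102": 6.4, "112": 27.9}
delta = {"101": -16.2, "103": -3.4, "102": -33.6, "112": -12.1}

# If delta = examiner - tc_avg, then the implied baseline is examiner - delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}

for statute, avg in tc_avg.items():
    print(f"§{statute}: examiner {examiner[statute]}% vs TC avg ~{avg}%")
```

Notably, all four statutes imply the same ~40.0% baseline, which suggests the dashboard compares against a single aggregate Tech Center average rather than per-statute averages.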

Office Action

§101
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on 11/03/2025 has been entered.

3. Currently claims 1, 5, 8 and 9 have been amended; claims 4 and 7 have been canceled. Therefore, claims 1-3, 5, 6, 8 and 9 are pending in this application.

Claim Rejections - 35 USC § 101

4. Non-Statutory (Directed to a Judicial Exception without an Inventive Concept/Significantly More)

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

● Claims 1-3, 5, 6, 8 and 9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

(Step 1) The current claims fall within one of the four statutory categories of invention (MPEP 2106.03). 
(Step 2A) → Prong-One: The claims recite a judicial exception(s), namely an abstract idea, as shown below: — Considering each of claims 1 and 9 as representative claims, the following claimed limitations recite an abstract idea: [present] a first [page] including a first chart, wherein the first chart includes a plurality of first prediction values and information on each of the plurality of first prediction values; based on a selected question recommendation condition, customize search conditions for a recommended question; [present] a second [page] that includes: (1) a first area on which a preview image of the recommended question and a second chart including a plurality of second prediction values of the recommended question are displayed, (2) a second area on which summary information of the recommended question is displayed, (3) a prediction status [area] for displaying prediction status data including one or more of a predicted score predicted based on solved questions, a trend of the predicted score, and the number of additional questions to be solved until the predicted score is updated, and when the user solves a question, the prediction status data is updated by reflecting a result of solving the question; [present] a question object of the recommended question corresponding to the preview image, gradually remove the second chart; determine the recommended question corresponding to the search conditions using a model [formulated] based on data including question information on questions and answer information corresponding to the question information, wherein the question information includes a unique ID given to each of the questions, question category information indicating a type of each of the questions, and location information indicating where each of the questions is located within an overall question sequence; obtain the plurality of second prediction values of the recommended question using the model [e.g., formula] based on a 
training result of the user, determine prediction values of each of a plurality of questions by using the model, and the prediction values of each of the plurality of questions includes a solving completion probability which is a probability of solving a question to the end without terminating the training program after the user starts to solve the question, and a correct answer probability which is a probability of answering the question correctly, wherein the plurality of second prediction values of the recommended question include: a solving completion probability which is a probability of solving the recommended question to the end without terminating the training program after the user starts to solve the recommended question, and a correct answer probability which is a probability of answering the recommended question correctly, wherein the determining of the recommended question comprises determining the recommended question among the plurality of questions by using the model based on the prediction values of each of the plurality of questions; further, based on the user solving the recommended question, update the prediction values by reflecting a result of the user solving the recommended question in the model; and further, obtain the correct answer probability of the recommended question by using a [second] model.

Thus, the limitations identified above recite an abstract idea since the limitations correspond to certain methods of organizing human activity, and/or mental processes, which are part of the enumerated groupings of abstract ideas identified according to the current eligibility standard (see MPEP 2106.04). 
For instance, the current claims correspond to managing personal behavior, such as teaching, wherein a user is presented with a first page that presents textual and/or graphic elements—such as: a chart that includes first prediction values, an object, an option for specifying a question recommendation condition, so that the user customizes search conditions for a recommended question, etc.; and based on the user’s indication to start training, the user is presented with a second page that presents: (1) a preview image of the recommended question and a second chart that includes a plurality of second prediction values of the recommended question, (2) summary information related to the recommended question, (3) prediction status data that includes at least one of a predicted score predicted based on solved questions, a trend of the predicted score, and the number of additional questions to be solved until the predicted score is updated, and when the user solves a question, the prediction status data is updated by reflecting a result of solving the question; and furthermore, the preview image is presented to the user as a clear state image, including a question object of the recommended question corresponding to the preview image as a clear state image, wherein the second chart is gradually removed, etc.

In this regard, the recommended question above, which corresponds to the customized search conditions, is determined based on a model/algorithm formulated using data that includes: question information on questions and answer information corresponding to the question information (the question information includes “question information on questions and answer information . . . 
and location information indicating where each of the questions is located within an overall question sequence”); and wherein the second prediction values are also determined using the model above; wherein the model is used to determine, for each of a plurality of questions, prediction values; and such prediction values for a question, including the second prediction values for the recommended question, reflect (a) the probability of solving the question without terminating the training, and (b) the probability of answering the question correctly, wherein the recommended question is further determined, among the plurality of questions, using the model based on the prediction values of each of the plurality of questions; and furthermore, the prediction values are updated based on the user solving the recommended question by reflecting—in the model—a result of the user solving the recommended question; and wherein a further model is used to obtain the correct answer probability of the recommended question.

In addition, given the limitations that recite the process of presenting first and second prediction values, including: prediction status data that includes one or more of a predicted score predicted based on solved questions, a trend of the predicted score, and the number of additional questions to be solved until the predicted score is updated; formulating a mathematical model/algorithm based on collected information (e.g., collected information that includes “question information on questions and answer information . . 
.and location information indicating where each of the questions is located within an overall question sequence”); and the use of the mathematical model/algorithm above to determine the recommended question and the second prediction values; determine prediction values for each of a plurality of questions; wherein the prediction values for a question, including the recommended question, include: (a) the probability of solving the question without terminating training; (b) the probability of correctly answering the question; and also the process of: (i) using the model to determine the recommended question among the plurality of questions based on the prediction values of each of the plurality of questions; (ii) updating, based on the user solving the recommended question, the prediction values by reflecting a result of the user in the model; and (iii) obtaining the correct answer probability of the recommended question by using an additional model, etc., the current claims also overlap with the abstract idea group mental processes, such as an evaluation, an observation, a judgment, and/or an opinion, etc.

(Step 2A) → Prong-Two: The claims recite additional elements, wherein a learner terminal (or a device comprising a display and a processor) is utilized to facilitate the recited functions regarding: displaying to a user a first page depicting information (“based on a user executing a training program in the learner terminal, displaying a first screen including a first chart and an object for starting a training program on a display of the learner terminal, wherein the first chart includes a plurality of first prediction values and information on each of the plurality of first prediction values”); displaying one or more results based on one or more inputs collected from the user (“based on the user selecting the object for configuring the question recommendation condition, activating a customization configuration window . . . 
and wherein the second chart is displayed as a clear state image, and the preview image is displayed as a translucent state image and overlaps the second chart, and when the user solves a question, the prediction status data is updated in real time by reflecting a result of solving the question”); displaying one or more additional results based on one or more additional inputs collected from the user (“based on the user performing a predetermined action on the second screen, converting the preview image displayed as the translucent state image to a clear state image, displaying a question object of the recommended question corresponding to the preview image on the display of the learner terminal as a clear state image, gradually increasing a transparency of the second chart and removing the second chart from the second screen”); and wherein a pertinent algorithm—such as an AI—is utilized to analyze the collected and/or stored information and generate one or more results (“determining the recommended question corresponding to the configured search conditions using an artificial intelligence (AI) model trained based on training data including question information . . . and location information indicating where each of the questions is located within an overall question sequence ”, “obtaining the plurality of second prediction values of the recommended question using the AI model based on a training result of the user . . . determining prediction values of each of a plurality of questions by using the AI model, and the prediction values of each of the plurality of questions includes a solving completion probability which is a probability of solving a question to the end without terminating the training program after the user starts to solve the question, and a correct answer probability which is a probability of answering the question correctly, wherein plurality of second prediction values of the recommended question include: a solving completion probability . . . 
obtaining the correct answer probability of the recommended question by using a recurrent neural network (RNN) AI model or a transformer AI model”), etc. However, the claimed additional elements fail to integrate the abstract idea into a patent-eligible practical application since the additional elements are utilized merely as a tool to facilitate the abstract idea. Thus, when each claim is considered as a whole, the additional elements fail to integrate the abstract idea into a practical application since they fail to impose meaningful limits on practicing the abstract idea. For instance, when each of the claims is considered as a whole, none of the claims provides an improvement over the relevant existing technology. The observations above confirm that the claims are indeed directed to an abstract idea. (Step 2B) Accordingly, when the claim(s) is considered as a whole (i.e., considering all claim elements both individually and in combination), the claimed additional elements do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to “significantly more” than the abstract idea itself (also see MPEP 2106). The claimed additional elements are directed to conventional computer elements, which are serving merely to perform conventional computer functions. Accordingly, none of the current claims recites an element—or a combination of elements—directed to an inventive concept. It is worth noting, per the original disclosure, that the claimed process/device is directed to a conventional and generic arrangement of the additional elements. 
For instance, the disclosure describes a method/system that provides a recommended training content—such as a question—to a user based on the analysis of collected information using an artificial intelligence algorithm ([0006]; [0007]); and wherein the system comprises a server that communicates with a learner terminal over a conventional wired or wireless network; and thereby the system provides the user with one or more content items (e.g. [0029] to [0031]). It is further worth noting that the utilization of the conventional computer/network technology to facilitate the delivery of pertinent training material(s) to a user(s), including the process of executing a known algorithm(s)—such as an artificial intelligence—to analyze collected data and generate one or more results, etc., is already directed to a well-understood, routine or conventional activity in the art (e.g., see US 2014/0120516; US 2012/0082974; US 2006/0166174, etc.). Similarly, regardless of the field of use and/or the intended purpose, it is also part of the conventional computer/network technology to: (a) utilize an algorithm(s) to predict—based on the analysis of collected data—the probability of a user correctly performing a task, such as predicting the probability of the user correctly answering a given question (e.g., see US 2014/0279727; US 2015/0104766, etc.); and also (b) generate one or more translucent pages or GUIs (e.g., a first translucent page that is superimposed on a second page, etc.), including automatically adjusting the degree of transparency based on the user’s interaction with the page (e.g., see US 5,283,560, etc.). The above observation confirms that the current claimed invention fails to amount to “significantly more” than an abstract idea. It is worth noting that the above analysis already encompasses each of the current dependent claims (i.e., claims 2, 3, 5, 6 and 8). 
Particularly, each of the dependent claims also fails to amount to “significantly more” than the abstract idea since each dependent claim is directed to a further abstract idea, and/or a further conventional computer element/function utilized to facilitate the abstract idea. Accordingly, when considered as a whole, none of the claims is implementing an element—or a combination of elements—directed to an inventive concept (e.g., none of the claims implements an element—or a combination of elements—that provides an improvement over the conventional technology).

► Applicant’s arguments directed to §101 have been fully considered (the arguments filed on 11/03/2025). However, the arguments are not persuasive at least for the following reasons:

Firstly, while listing the features recited per the previously presented claim 1 (see pages 10-11 of the current argument), Applicant asserts that “the previous claim 1 recites the feature of converting the preview image to a clear state image and displaying a question object as a clear state image and gradually increasing a transparency of the second chart and removing the second chart from the second screen (e.g., FIG. 5) . . . Examiner asserts that claim 1 falls under the method of organizing human activities or the mental processes grouping of abstract idea . . . the Examiner contends that the claimed elements are directed to well-understood, routine or conventional activity in the art and do not constitute a technological improvement . . . Applicant respectfully submits that each of the elements and the ordered combination of the elements in claim 1 are not purely conventional and are sufficient to ensure that the claim amounts to significantly more than the judicial exception” (emphasis added). 
Thus, given Applicant’s arguments above, Applicant appears to be attempting to demonstrate the alleged eligibility of the claim based on Step 2B of the eligibility analysis, i.e., determining whether the claim—when considered as a whole—is directed to a well-understood, routine, conventional activity (hereinafter WRCA) in the art. However, Applicant does not appear to properly apply the WRCA test. In particular, given Applicant’s assertion that “each of the elements and the ordered combination of the elements in claim 1 are not purely conventional” (emphasis added), Applicant appears to rely on the new abstract idea that the claim is reciting in an attempt to support the assumption that the claim is beyond WRCA. Although the test per Step 2B requires the consideration of the claim as a whole, the new abstract idea that the claim is reciting does not necessarily influence the WRCA test. This is mainly because the WRCA test (Step 2B) is concerned with the underlying computer-based technology that the claim is implementing, but not necessarily the new abstract idea. In particular, the WRCA test is evaluating whether the claim, when considered as a whole (ordered combination), is directed to the conventional and generic arrangement of the additional elements. Thus, given the fact that the claimed (and the disclosed) technology is the conventional computer/network technology, none of the current (and previous) claims is beyond WRCA even if the claims are reciting a new abstract idea. In addition, the limitation that Applicant identified above, namely “converting the preview image to a clear state image and displaying a question object as a clear state image and gradually increasing a transparency of the second chart and removing the second chart from the second screen”, does not constitute a technological improvement over the existing computer/network technology. 
This is because the above is merely signifying the sequence of different content items that the computer is presenting to the user. In particular, no technological improvement is achieved regardless of whether one or more of the items (e.g., the chart, the preview image, and/or the question object) is being: (i) displayed as a transparent/clear image, (ii) converted into a clear/transparent image, and/or (iii) removed gradually/instantly, etc. This is again because the claimed (and disclosed) conventional computer is merely presenting, as part of its conventional computer functions, one or more sequences of content items. Thus, except for utilizing the conventional computer/network technology—merely as a tool—to facilitate the new abstract idea, neither the current claims (or the previous claims) nor the original disclosure—when considered as a whole—implements an inventive concept that amounts to “significantly more” than an abstract idea. Thus, Applicant’s arguments are not persuasive. Secondly, while listing part of the features that currently amended claim 1 is reciting, including parts of the specification and drawings that provide support regarding the amendment (see pages 12-16 of the argument), Applicant is asserting that “claim 1 is directed to patent eligible subject matter under 35 USC 101 . . . The underlying idea in the claims is not merely an abstract idea . . . Under Prong One of Step 2A, claim 1 does not recite any of the three groupings of subject matter that may be considered as an abstract idea . . . the amended claim 1 is not merely the method of organizing the human activity or the mental process . . . under Prong Two of Step 2A, even assuming that claim 1 recites any alleged judicial exception, the alleged judicial exception is integrated into a practical application of the exception. 
In particular, claim 1 recites the features of displaying the screens and determining the recommended question using an artificial intelligence (AI) model trained based on training data of the user and the plurality of second prediction values of the recommended question obtained using the AI model based on a training result of the user . . . claim 1 provide an improvement of the existing method (e.g., paragraphs [0078]-[0080] of the specification)” (emphasis added). However, Applicant does not appear to articulate a rationale (if any) to substantiate any of the assertions above. For instance, regarding prong-one of Step 2A, despite asserting that “claim 1 does not recite any of the three groupings of subject matter that may be considered as an abstract idea” (emphasis added), Applicant does not appear to provide any rationale to substantiate the above assertion. In contrast, part of the abstract idea, which current claim 1 is reciting, corresponds to the provision of a relevant training material to a user; such as, the process of providing one or more content items to the user (e.g., information in the form of a first chart that includes prediction scores, etc.), including: (a) allowing the user to specify a condition(s) for customizing a recommended question, and (b) subsequently presenting the user with the relevant training material (e.g., a preview image of the recommended question, prediction values for the recommended question, summary information regarding the recommended question), etc. Of course, the above corresponds to at least one of the abstract idea groups, namely certain methods of organizing human activity (e.g., managing personal behavior). Consequently, Applicant’s conclusory assertions, “[t]he underlying idea in the claims is not merely an abstract idea . . . claim 1 does not recite any of the three groupings of subject matter that may be considered as an abstract idea . . . 
claim 1 is not merely the method of organizing the human activity or the mental process”, are not persuasive. Similarly, regarding prong-two of Step 2A, Applicant does not present whether any of the claims is implementing an element—or a combination of elements—that provides a technological improvement over the relevant existing technology. Instead, Applicant appears to be emphasizing the GUIs that the computer is displaying, including the use of an AI model to recommend a question, wherein the AI model is trained based on collected information (e.g., information relating to the user and the plurality of second prediction values, etc.). However, none of the above—alone or in combination—provides any technological improvement over the relevant existing technology. This is because it is part of the existing computer/network technology to generate one or more GUIs based on the type of task being performed. Similarly, it is part of the existing computer/network technology to train a machine-learning algorithm(s), including an AI model, using collected information; wherein the algorithm is also updated based on newly collected information, etc. In contrast, given the underlying technology of the currently claimed (and the originally disclosed) method/system, an integration (if any) of the abstract idea into a patent-eligible practical application is demonstrated if any of the claims is reciting an element—or a combination of elements—that provides a technological improvement over the existing computer/network technology. Thus, due to the lack of technological improvement, each of the claims—when considered as a whole—fails to integrate the abstract idea into a patent-eligible practical application. Consequently, Applicant’s arguments are not persuasive. Applicant also appears to be attempting to substantiate the alleged integration of the abstract idea while asserting that “the operations of claim 1 provide an improvement of the existing method” (emphasis added). 
However, given the new abstract idea that the current claims are reciting, the claims may implement a method with a new sequence of steps; this does not necessarily imply a technological improvement. Even the sections that Applicant has cited (i.e., [0078] to [0080]) are describing one or more optional alternatives, which the system may use for displaying one or more of the content items to the user; such as presenting a question preview image that “may” overlap a chart, wherein the question preview image “may be” displayed as a translucent image; and furthermore, when the user wants to solve the recommended question, (a) the question preview image “may be” converted from a translucent image to a clear image; and (b) the chart “may” gradually increase in transparency, etc. Accordingly, except for presenting generic description regarding some optional ways for displaying one or more of the content items to the user, the original disclosure does not appear to contemplate—much less implement—any new/advanced technology beyond the existing or conventional computer/network technology.

Consequently, none of Applicant’s conclusory assertions regarding the current claims, “[t]he claim as a whole is not "directed to" any alleged judicial exception and are patent eligible under Step 2A . . . Additional elements or combination of elements in the claims are sufficient to ensure that the claim amounts to significantly more than the judicial exception . . . the additional elements or the combination of the elements in the claims transform the idea into patent-eligible subject matter . . . each of the operations and the ordered combination of the operations in claim 1 are not purely conventional and are a clear practical application of any alleged abstraction”, is persuasive. Thus, at least for the reasons discussed above, the Office concludes that none of the current claims complies with the eligibility criteria established per §101. 
Prior Art

● Considering each of claims 1 and 9 as a whole (including the respective dependent claims), the prior art does not teach or suggest the invention as currently claimed (regarding the state of the prior art, see the Office action dated 03/28/2024).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRUK A GEBREMICHAEL whose telephone number is (571) 270-3079. The examiner can normally be reached from 7:00 AM to 3:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID LEWIS, can be reached at (571) 272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRUK A GEBREMICHAEL/
Primary Examiner, Art Unit 3715

Prosecution Timeline

Jul 30, 2021
Application Filed
Mar 22, 2024
Non-Final Rejection — §101
Jun 28, 2024
Response Filed
Aug 24, 2024
Final Rejection — §101
Nov 29, 2024
Request for Continued Examination
Dec 04, 2024
Response after Non-Final Action
Jan 10, 2025
Non-Final Rejection — §101
Apr 15, 2025
Response Filed
May 05, 2025
Applicant Interview (Telephonic)
May 05, 2025
Examiner Interview Summary
Jun 28, 2025
Final Rejection — §101
Nov 03, 2025
Request for Continued Examination
Nov 04, 2025
Response after Non-Final Action
Nov 15, 2025
Non-Final Rejection — §101
Dec 23, 2025
Examiner Interview Summary
Dec 23, 2025
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12165542: MOTION PLATFORM (2y 5m to grant; granted Dec 10, 2024)
Patent 12008914: SYSTEMS AND METHODS TO SIMULATE JOINING OPERATIONS (2y 5m to grant; granted Jun 11, 2024)
Patent 11990055: SURGICAL TRAINING MODEL FOR LAPAROSCOPIC PROCEDURES (2y 5m to grant; granted May 21, 2024)
Patent 11837105: PSEUDO FOOD TEXTURE PRESENTATION DEVICE, PSEUDO FOOD TEXTURE PRESENTATION METHOD, AND PROGRAM (2y 5m to grant; granted Dec 05, 2023)
Patent 11810467: FINGER RECOGNITION SYSTEM AND METHOD FOR USE IN TYPING (2y 5m to grant; granted Nov 07, 2023)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 22% (47% with interview, +25.0%)
Median Time to Grant: 4y 5m
PTA Risk: High
Based on 680 resolved cases by this examiner. Grant probability derived from career allow rate.
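The headline projections follow from the raw counts shown earlier. A minimal sketch, assuming the grant probability is simply the rounded career allow rate and the interview figure adds the +25.0-point lift:

```python
granted, resolved = 152, 680  # career totals for this examiner, from above

grant_pct = round(granted / resolved * 100)  # career allow rate as a percentage
with_interview = grant_pct + 25.0            # apply the +25.0-point interview lift

print(f"Grant probability: {grant_pct}%")
print(f"With interview: {with_interview:.0f}%")
```

This reproduces the dashboard's 22% and 47% figures, which supports reading the "With Interview" number as an additive percentage-point adjustment rather than a separately measured rate.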
