Prosecution Insights
Last updated: April 19, 2026
Application No. 17/710,143

APPARATUS, SYSTEM, AND OPERATION METHOD THEREOF FOR EVALUATING SKILL OF USER THROUGH ARTIFICIAL INTELLIGENCE MODEL TRAINED THROUGH TRANSFERRABLE FEATURE APPLIED TO PLURAL TEST DOMAINS

Status: Non-Final OA §101
Filed: Mar 31, 2022
Examiner: PRATT, EHRIN LARMONT
Art Unit: 3629
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Socra AI Inc.
OA Round: 3 (Non-Final)
Grant Probability: 15% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 9m
With Interview: 28%

Examiner Intelligence

Career Allow Rate: 15% (52 granted / 338 resolved; -36.6% vs TC avg)
Interview Lift: +13.1% across resolved cases with an interview (moderate lift)
Avg Prosecution: 4y 9m typical timeline (41 applications currently pending)
Total Applications: 379 across all art units
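
As a sanity check on how these card figures relate, here is a minimal sketch. The 52/338 counts and the +13.1 point lift come from this page; the assumption (not stated by the dashboard) is that the "with interview" figure is simply the career allow rate plus the lift in percentage points.

# Minimal sketch; counts taken from the dashboard above.
granted, resolved = 52, 338

allow_rate = granted / resolved                    # 0.1538... -> shown as 15%
interview_lift = 0.131                             # +13.1 percentage points
with_interview = allow_rate + interview_lift       # 0.2848... -> shown as 28%

print(f"career allow rate: {allow_rate:.1%}")      # 15.4%
print(f"with interview:    {with_interview:.1%}")  # 28.5%

Both printed values round to the 15% and 28% shown on the page, so the dashboard's numbers are internally consistent under this reading.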

Statute-Specific Performance

§101: 37.1% (-2.9% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 338 resolved cases.
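
Notably, the four deltas all point back to a single Tech Center baseline of 40%. A minimal check, assuming "vs TC avg" is a plain percentage-point difference (the page does not define the underlying rate, e.g. whether it is a rejection-overcome rate):

# Back out the implied Tech Center averages from the deltas above.
rates  = {"§101": 37.1, "§103": 35.5, "§102": 12.5, "§112": 12.6}
deltas = {"§101": -2.9, "§103": -4.5, "§102": -27.5, "§112": -27.4}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]   # e.g. 37.1 - (-2.9) = 40.0
    print(f"{statute}: examiner {rate}% vs implied TC average {tc_avg}%")

Every statute yields an implied TC average of 40.0%, which suggests the deltas were computed against one shared baseline estimate rather than per-statute averages.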

Office Action

§101
DETAILED ACTION

This communication is a Non-Final Office Action on the merits in response to communications received on 12/29/2025. Claims 1, 5, 7, 9, 10 have been amended. Therefore, claims 1-2 and 5-10 are pending and have been addressed below. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1-2 and 5-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

4. Under Step 1 of the two-part analysis from Alice Corp, claim 1 recites a machine (i.e., a concrete thing, consisting of parts, or of certain devices and combinations of devices) and claim 10 recites a process (i.e., an act or step, or a series of acts or steps). Thus, each of the claims falls within one of the four statutory categories.

5. Under Step 2A – Prong One of the two-part analysis from Alice Corp, the claimed invention recites an abstract idea. Claims 1 and 10 recite: "receive problem response information and test score information of a reference test domain", "obtain transferable feature from the problem response information and the test score information of the reference test domain", "wherein the transferable feature indicates user behavior characteristic or user learning characteristic that can be applied in common to the reference test domain and a target test domain, and the transferable feature includes one of (i) a rate of increase in a test score according to increase in the number of problems answered correctly or (ii) a correlation between a probability of a user's departure during learning and a test score when the test score decreases in proportion to an increasing probability of the user's departure during…learning", "wherein an amount of problem response information and test score information of the target test domain is smaller than an amount of the problem response information and the test score information of the reference test domain", "using the transferable feature as an input such that…can predict transferable feature from feature information and predict a test score of a user from the predicted transferable feature", "wherein the feature information is information that is usable in common for comparison of relative skills of a plurality of users in the reference test domain and the target test domain", "wherein predict the transferable feature from the feature information and to predict the test score of the user from the transferable feature", "transfer… the target test domain to be used for predicting the test score of the user in the target test domain", "predict the transferable feature from the feature information in the target test domain, and predict the test score of the user from the predicted transferable feature in the target test domain".

Under the broadest reasonable interpretation, the limitations recite an abstract idea for skill evaluation and predicting a test score for a user, which encompasses concepts such as commercial interactions (i.e., marketing or sales activities, business relations) and mental processes (i.e., observation, evaluation, judgment, opinion) that fall within the certain methods of organizing human activity and mental processes groupings of abstract ideas. See MPEP 2106.04. The Applicant's Specification at [pgs. 1-2] states:

"Recently, the Internet and electronic devices have been actively used in each field, and the educational environment is also changing rapidly. In particular, with the development of various educational media, learners may choose and use a wider range of learning methods. Among the learning methods, education services through the Internet have become a major teaching and learning method by overcoming time and space constraints and enabling low-cost education. To keep up with the trend, customized education services, which are not available in offline education due to limited human and material resources, are also diversifying. For example, artificial intelligence is used to provide educational content that is subdivided according to the individuality and ability of a learner so that the educational content is provided according to the individual competency of the learner, which departs from standardized education methods of the past.

A user skill evaluation model is an artificial intelligence model that models the degree of knowledge acquisition of a student on the basis of a learning flow of the student. Specifically, the user skill evaluation model refers to, given a record of a problem solved by a student and a response of the student, predicting the probability of a next problem being answered correctly and the resulting test score of the user. In order to generate a user skill evaluation model of a certain test domain, a large amount of actual test score information for model training is required. However, in order to collect the actual score, users need to directly take tests, which requires a lot of time and money for data collection. For example, unlike the probability of a correct answer that is predictable directly from problem-solving data collectable by an AI model, when test scores or grades are predicted, actual test score information for directly predicting the test scores or grades is insufficient and collected offline only in a small amount, such that when compared to the prediction of the probability of a correct answer, the prediction of test scores or grades has lower accuracy. In addition, since generating a user skill evaluation model for each test domain and evaluating the user skill evaluation model are both performed manually by model developers, there is a difficulty in ensuring sufficient performance in real service all the time, and a lot of time and effort is taken to generate the model."

Consistent with the disclosure, the series of steps recites methods for predicting how well a student may score on a test. The acts for predicting the test score of the user recite "receive", "obtain", "using", "transfer", and "predict", which collect and compare historical testing responses and skill data of the user in order to determine a predicted test score for the user.
Thus, the series of steps may be reasonably characterized as mental processes that can be practically performed in the human mind, with or without the use of a physical aid such as pen and paper. Additionally, the series of steps for predicting a test score for a user pertains to business relations that teaching/recruiting professionals typically perform when helping individuals identify suitable colleges and make informed decisions about their future. As such, the claim recites an abstract idea.

6. Under Step 2A – Prong Two of the two-part analysis from Alice Corp, this judicial exception is not integrated into a practical application because the additional elements of "an apparatus", "memory storing instructions", "a processor configured to execute the instructions to:", "from a user terminal", "train an artificial intelligence (AI) model by", "the AI model", "the training of the AI model comprises training the AI model to", "the trained AI model to a skill evaluation AI model", "through the skill evaluation AI model by using an algorithm implemented as a program", "wherein the processor is further configured to execute the instructions to", and "the skill evaluation model" (see claims 1 and 10) are recited at a high level of generality in light of the specification [see Fig. 1 and pgs. 1-2, 7-10]. Thus, because the specification describes the additional elements in general terms without describing the particulars, the additional elements may be broadly but reasonably construed as reciting generic computer components being used to aid in performing the abstract idea. Therefore, the additional elements recited in the claim add the words "apply it" to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely use a computer processor as a tool to perform the abstract idea, as discussed in MPEP 2106.05(f).

The other additional element of "in response to a change in data of the reference test domain, repeat a process of obtaining the transferable feature to update" adds insignificant extra-solution activity to the judicial exception, as discussed in MPEP 2106.05(g). The other additional elements of "for predicting a test score through a transferable feature indicating a difference in skills of users in a plurality of test domains" and "the online" are an attempt to limit the claimed invention to a field of use or particular technological environment, as discussed in MPEP 2106.05(h).

Thus, the additional claim elements are not indicative of integration into a practical application, because the claims do not involve improvements to the functioning of a computer or to any other technology or technical field (MPEP 2106.05(a)), the claims do not apply or use the abstract idea to effect a particular treatment or prophylaxis for a disease or medical condition (Vanda Memo), the claims do not apply the abstract idea with, or by use of, a particular machine (MPEP 2106.05(b)), the claims do not effect a transformation or reduction of a particular article to a different state or thing (MPEP 2106.05(c)), and the claims do not apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (MPEP 2106.05(e) and Vanda Memo). Therefore, the claims do not, for example, purport to improve the functioning of a computer.
Nor do they effect an improvement in any other technology or technical field. Accordingly, the additional elements do not impose any meaningful limits on practicing the abstract idea, and the claims are directed to an abstract idea.

7. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "an apparatus", "memory storing instructions", "a processor configured to execute the instructions to:", "from a user terminal", "train an artificial intelligence (AI) model by", "the AI model", "the training of the AI model comprises training the AI model to", "the trained AI model to a skill evaluation AI model", "through the skill evaluation AI model by using an algorithm implemented as a program", "wherein the processor is further configured to execute the instructions to", and "the skill evaluation model" (see claims 1 and 10) amount to nothing more than mere instructions to apply the judicial exception and do not provide an inventive concept at Step 2B. The other additional element of "in response to a change in data of the reference test domain, repeat a process of obtaining the transferable feature to update" was considered to be insignificant extra-solution activity in Step 2A, and is thus re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. The Symantec, Alice Corp, Ultramercial, and Versata Dev. Group Inc. court decisions cited in MPEP 2106.05(d)(II) indicated that "receiving or transmitting data over a network", "electronic recordkeeping", and "storing and retrieving information in memory" are all well-understood, routine, conventional activity when claimed in a generic manner. Thus, when considering these elements individually and as a whole with the judicial exception, the claimed invention does not provide an inventive concept at Step 2B.

8. Claims 2 and 5-9 are dependent on claim 1.

Claim 2 recites "wherein the processor is further configured to execute the instructions to, when a combination of a plurality of transferable features discriminates a difference in skill of the user in the plurality of test domains, allow the transferable feature to include a combination of at least one transferable feature," which further narrows how the abstract idea may be performed, but does not make the claim any less abstract.

Claim 5 recites "wherein the feature information includes response comparison information generated by comparing the problem response information about problems solved in common by two different users in the reference test domain," which further narrows the data/information recited within the abstract idea, but does not make the claim any less abstract.

Claim 6 recites "wherein the response comparison information includes information about a number of problems answered correctly by both of the two different users, a number of problems answered correctly by only one of the two different users, and a number of problems answered incorrectly by both of the two different users," which further narrows the data/information recited within the abstract idea, but does not make the claim any less abstract.
Claim 7 recites "wherein the processor is further configured to execute the instructions to update a weight determined in the training of the AI model to the skill evaluation AI model of the target test domain" at a high level of generality, and amounts to mere instructions to apply the abstract idea on a computer, as discussed in MPEP 2106.05(f).

Claim 8 recites "wherein the processor is further configured to execute instructions to determine a validity of the AI model according to whether the AI model satisfies basic properties of tests or whether the AI model operates normally; and determine a validity of the skill evaluation AI model according to whether the skill evaluation AI model satisfies basic properties of tests or whether the skill evaluation AI model operates normally" at a high level of generality, and amounts to mere instructions to apply the abstract idea on a computer, as discussed in MPEP 2106.05(f).

Claim 9 recites "wherein the processor is further configured to execute instructions to, when a test score of Student i is Si, a test score of Student j is Sj, and transferrable feature is Li/(Li+Lj), predict test scores of users in the target test domain, by using, as the score prediction model, a gradient descent model for finding Li that minimizes a value of [equation reproduced only as an image in the original action]", which further narrows how the abstract idea may be performed, but does not make the claim any less abstract.

The additional elements recited in the dependent claims use computer components or other machinery in their ordinary capacity for economic or other tasks (e.g., to receive, process, or output data) and/or simply add generic computer components after the fact to the abstract idea, which does not integrate the judicial exception into a practical application or provide an inventive concept.

Response to Arguments

Applicant's arguments filed 12/29/2025 have been fully considered but they are not persuasive.

With Respect to Rejections Under 35 USC 101

Applicant argues: "Applicant amends independent claim 1 to additionally recite the following features: the reference test domain/the target test domain; wherein an amount of problem response information and test score information of the target test domain is smaller than an amount of the problem response information and the test score information of the reference test domain; the training of the AI model comprises training the AI model to predict the transferable feature from the feature information and to predict the test score of the user from the transferable feature; predict the test score of the user from the predicted transferable feature in the target test domain through the skill evaluation AI model by using an algorithm implemented as a program; in response to a change in data of the reference test domain, repeat a process of obtaining the transferable feature to update the skill evaluation AI model."

The Examiner respectfully disagrees. Contrary to the remarks, the amendments above to claim 1 do not change the previous analysis. Here, the Applicant's reply merely restates the newly added limitations without discussing how any of the alleged features integrate the judicial exception into a practical application or provide an inventive concept. At best, claim 1 is drafted in a results-oriented manner for evaluating skill and predicting a test score for a user.
The limitations gather the data necessary to perform the abstract idea and carry out the analysis steps for predicting the test score, which is subject matter that falls within the mental processes and certain methods of organizing human activity groupings. The AI model and skill evaluation model recited in claim 1 were considered additional elements that are recited at a high level of generality in light of the Specification [see Fig. 1 and pgs. 1-2, 7-10]. At best, the AI models are recited as "apply it" (or an equivalent), or mere instructions to implement the abstract idea on a computer. See MPEP 2106.05(f). None of the cited features above for training or updating the data used by the AI models improve the manner in which AI operates; rather, they recite the use of generic AI technology being applied to aid in implementing the steps recited in the abstract idea. For these reasons, the rejections under 101 are being maintained.

Applicant further argues: "The relevant parts of the originally filed specification describe:

'In order to generate a user skill evaluation model of a certain test domain, a large amount of actual test score information for model training is required. However, in order to collect the actual score, users need to directly take tests, which requires a lot of time and money for data collection.'

'In order to solve the limitation, the system 50 for evaluating a skill of a user according to the embodiment of the present invention may use a basic model trained from a reference domain rich in problem response information and test score information as a skill evaluation model of a user in a target domain having insufficient or no data.'

'A reference domain that is rich in previously collected problem response information and test score information may be assumed to be the Test of English for International Communication (TOEIC). A target domain that is lacking or absent in data may be assumed to be the real estate agent test.'

'More specifically, the basic model training unit 220 may train a transferable feature prediction model for predicting a transferable feature from feature information and a score prediction model for predicting a test score from the transferable feature.'

'Score prediction may be performed according to various algorithms that may be implemented as a program. The basic model may then be transferred to a target domain having insufficient or no data and used as a skill evaluation model. The skill evaluation model may be used to predict a test score from user feature information of the target domain.'

'As is apparent from the above, the apparatus for evaluating a skill of a user, the system for evaluating a skill of a user, and the operation method can effectively evaluate a user's skill even in an educational domain lacking in training data by extracting a transferable feature that can be applied in common to a plurality of tests from a reference domain rich in training data, and using an AI model trained with the extracted transferable feature for evaluation of an education domain having insufficient or no training data. The apparatus for evaluating a skill of a user, the system for evaluating a skill of a user, and the operation method can periodically improve the performance of a skill evaluation model according to an addition of data by repeating extracting a transferable feature and updating the user skill evaluation model in response to a change in data of a reference domain.
The apparatus for evaluating a skill of a user, the system for evaluating a skill of a user, and the operation method can effectively predict a test score in a test domain lacking in absolute problem-solving data and test scores by predicting a score using response comparison information obtained by mutual comparison on problem solving results of a plurality of users.'"

The Examiner respectfully disagrees. Contrary to the remarks, the cited passages from Applicant's Specification do not change the previous analysis. The Specification [see Fig. 1 and pgs. 1-2, 7-10] evidences that the asserted claims are directed to an abstract idea that merely seeks to use computers and AI technology as a tool, not to an improvement in computer capabilities. The passages confirm that the problem facing the inventor was how to perform the abstract idea of evaluating the skill of a user and predicting a test score for the user sufficiently. The Specification does not support a finding that the claims are directed to a technological improvement in computer or AI technology. The claimed methods are not rendered patent eligible by the fact that (using existing AI technology) they perform a task undertaken by humans with greater speed and efficiency than could previously be achieved.

It is also important for Applicant to note that the courts have previously held that the utility of a method does not make it eligible. See Univ. of Fla. Rsch. Found., Inc. v. Gen. Elec. Co., 916 F.3d 1363, 1367 (Fed. Cir. 2019) (automated data synthesis technology did not make claims non-abstract even if it produced "life altering consequences"); In re Elbaum, No. 2023-1418, 2023 WL 8794636, at *2 (Fed. Cir. Dec. 20, 2023) (an abstract idea's tax benefits and usefulness did not confer eligibility); In re Mahapatra, 842 F. App'x 635, 638 (Fed. Cir. 2021) ("[T]he fact that an abstract idea may have beneficial uses does not mean that claims embodying the abstract idea are rendered patent eligible."); Western Express Bancshares, LLC v. Green Dot Corp., 816 F. App'x 485, 487 (Fed. Cir. 2020) (transforming parties' legal and financial obligations is an abstract idea).

At best, the alleged improvements discussed by Applicant from the Specification are recited within the abstract idea, not in the generic computing equipment (i.e., AI models) being used to aid in performing the abstract idea. For these reasons, the rejections under 101 are being maintained.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EHRIN PRATT, whose telephone number is (571) 270-3184. The examiner can normally be reached 8-5 EST, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lynda Jasmin, can be reached at 571-272-6782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EHRIN L PRATT/
Examiner, Art Unit 3629

/NATHAN C UBER/
Supervisory Patent Examiner, Art Unit 3626
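
Claim 9's score-prediction model is the one concretely algorithmic limitation in this action: latent skills Li are found by gradient descent so that the transferable feature Li/(Li+Lj) tracks pairwise score relationships. Because the minimized expression survives only as an image in the action, the sketch below substitutes an assumed pairwise squared-error loss, sum over pairs i<j of (Si/(Si+Sj) - Li/(Li+Lj))^2; the function name and the loss itself are illustrative stand-ins, not the claimed formula.

import numpy as np

def fit_latent_skills(scores, lr=0.05, steps=2000):
    """Gradient-descent fit of latent skills L so that L[i]/(L[i]+L[j])
    tracks the observed pairwise ratio S[i]/(S[i]+S[j]) (assumed loss)."""
    S = np.asarray(scores, dtype=float)
    n = len(S)
    L = np.ones(n)  # positive initialization; overall scale is arbitrary
    for _ in range(steps):
        grad = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                err = L[i] / (L[i] + L[j]) - S[i] / (S[i] + S[j])
                # d/dL[i] of L[i]/(L[i]+L[j]) is L[j]/(L[i]+L[j])^2; looping
                # over ordered pairs covers both halves of the symmetric
                # unordered-pair loss.
                grad[i] += 2 * err * L[j] / (L[i] + L[j]) ** 2
        L = np.maximum(L - lr * grad, 1e-6)  # keep skills positive
    return L

L = fit_latent_skills([720, 850, 640])  # e.g. three reference-domain scores
print(L / L.sum())                      # relative skill estimates

Under this assumed loss the optimum is L proportional to the scores themselves, so the printed relative skills converge toward the normalized score vector; the point of the sketch is only to make the Li/(Li+Lj) mechanics concrete.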

Prosecution Timeline

Mar 31, 2022: Application Filed
Feb 22, 2025: Non-Final Rejection — §101
Jun 30, 2025: Response Filed
Sep 22, 2025: Final Rejection — §101
Dec 29, 2025: Request for Continued Examination
Dec 31, 2025: Response after Non-Final Action
Feb 17, 2026: Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524786: METHODS AND SYSTEMS FOR DETERMINING GUEST SATISFACTION INCLUDING GUEST SLEEP QUALITY IN HOTELS (2y 5m to grant; granted Jan 13, 2026)
Patent 12175549: RECOMMENDATION ENGINE FOR TESTING CONDITIONS BASED ON EVALUATION OF TEST ENTITY SCORES (2y 5m to grant; granted Dec 24, 2024)
Patent 12079894: GUEST QUARTERS COORDINATION DURING MUSTER (2y 5m to grant; granted Sep 03, 2024)
Patent 12057143: SYSTEM AND METHODS FOR PROVIDING USER GENERATED VIDEO REVIEWS (2y 5m to grant; granted Aug 06, 2024)
Patent 11941642: QUEUE MANAGEMENT SYSTEM UTILIZING VIRTUAL SERVICE PROVIDERS (2y 5m to grant; granted Mar 26, 2024)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 15%
With Interview: 28% (+13.1%)
Median Time to Grant: 4y 9m
PTA Risk: High
Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
