Prosecution Insights
Last updated: April 19, 2026
Application No. 17/973,320

METHOD OF RECOMMENDING DIAGNOSTIC TEST FOR USER EVALUATION

Final Rejection — §101, §103
Filed: Oct 25, 2022
Examiner: YIP, JACK
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Socra AI Inc.
OA Round: 2 (Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 1m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 33% (229 granted / 702 resolved; -37.4% vs TC avg)
Interview Lift: +37.6% across resolved cases with interview
Typical Timeline: 4y 1m average prosecution; 51 applications currently pending
Career History: 753 total applications across all art units

Statute-Specific Performance

§101: 22.8% (-17.2% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 702 resolved cases

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In response to the amendment filed 11/24/2025, claims 1-2, 4-8 and 10-12 are pending; claims 3 and 9 have been cancelled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-2, 4-8 and 10-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Is the claimed invention a statutory category of invention? Claims 1 and 7 are directed to a method / device for recommending a diagnostic test question for user evaluation (Step 1: Yes).

Step 2A, Prong 1: Does the claim recite an abstract idea?
The limitation of steps: … generating, by the AI processor, a first matrix indicating whether users answer all questions correctly or incorrectly, wherein the first matrix includes labeled interactions and unlabeled interactions, and each of the labeled interactions has a value of 1 when a user of the users answers a question correctly or a value of 0 when the user of the users answers the question incorrectly; generating, by the diagnostic test selection unit, a second matrix based on the first matrix using knowledge tracing (KT), wherein the generating of the second matrix comprises generating the second matrix by using the KT to simulate the unlabeled interactions in the first matrix such that each of the unlabeled interactions has a value of 1 indicating a corresponding question being answered correctly or a value of 0 indicating the corresponding question being answered incorrectly in the second matrix; selecting, by the diagnostic test selection unit, the diagnostic test questions using Lasso regression based on the second matrix; and training, by the AI processor, the AI model for predicting the user's score using linear regression by inputting the diagnostic test questions as input values to the AI model, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components.
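Purely for illustration, the matrix pipeline recited in these claim steps can be sketched in a few lines of NumPy. This is a hedged sketch, not the applicant's implementation: the claim does not specify the KT model, so a per-user accuracy fill stands in for knowledge tracing, and all variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# First matrix: users x questions. Labeled interactions are 1 (correct)
# or 0 (incorrect); unlabeled interactions carry no real data (NaN here).
n_users, n_questions = 6, 8
first = rng.integers(0, 2, size=(n_users, n_questions)).astype(float)
unlabeled = rng.random((n_users, n_questions)) < 0.4
first[unlabeled] = np.nan

# Knowledge-tracing stand-in: simulate each unlabeled interaction as 0 or 1
# from the user's observed accuracy, yielding a fully labeled second matrix.
second = first.copy()
for u in range(n_users):
    labeled = first[u][~np.isnan(first[u])]
    p_correct = labeled.mean() if labeled.size else 0.5
    missing = np.isnan(second[u])
    second[u, missing] = (rng.random(missing.sum()) < p_correct).astype(float)
```

A production system would replace the fill loop with a trained KT network; the point is only that the second matrix is the first matrix with every unlabeled cell simulated to 0 or 1.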
The claimed method is akin to the mental processes of a teacher making observations, evaluations, and judgments. The mere nominal recitation of an electronic device for performing these steps does not take the claim limitation outside of the mental processes grouping. Thus, the claim recites a mental process (Step 2A, Prong 1: yes).

The claim is then analyzed to determine whether it is directed to any judicial exceptions. The claim recites the steps of generating a first matrix indicating whether users answer all questions correctly or incorrectly, and generating a second matrix based on the first matrix using knowledge tracing (KT). As discussed above, these steps describe a mathematical relationship, which has been found by the courts to be an abstract idea (Step 2A, Prong 1: yes).

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?

Per the 2019 Revised Patent Subject Matter Eligibility Guidance, if a claim as a whole integrates the recited judicial exception into a practical application of that exception, the claim is not "directed to" a judicial exception. Conversely, a claim that does not integrate a recited judicial exception into a practical application is directed to the exception. Evaluating whether a claim integrates an abstract idea into a practical application is performed by (a) identifying whether there are any additional elements recited in the claim beyond the abstract idea, and (b) evaluating those additional elements individually and in combination to determine whether they integrate the abstract idea into a practical application, using one or more of the considerations laid out by the Supreme Court and the Federal Circuit. Exemplary considerations indicating whether an additional element (or combination of elements) has or has not been integrated into a practical application are set forth in the 2019 PEG.
With respect to the instant claims, claims 1 and 7 recite the additional elements of: an electronic device, and an electronic device for recommending a diagnostic test question for user evaluation, the electronic device comprising: a communication module configured to communicate with a terminal; a memory; and an artificial intelligence (AI) processor. It is particularly noted that the use of an electronic device "as a tool" to perform an abstract method is indicated in the 2019 PEG as an example of an additional element that has not been integrated into a practical application. Even in combination, the recited additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits, such as an improvement to a computing system, on practicing the abstract idea (Step 2A, Prong 2: no).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Claims 1 and 7 recite the additional elements set forth above for Step 2A, Prong 2: an electronic device, and an electronic device for recommending a diagnostic test question for user evaluation, the electronic device comprising: a communication module configured to communicate with a terminal; a memory; and an artificial intelligence (AI) processor. Regarding these limitations, Applicant's specification describes these features in a generic manner: "… The system 20 for recommending a diagnostic test is a computing device capable of learning neural networks, and may be implemented in various electronic devices such as a server, a desktop personal computer (PC), a notebook PC, and a tablet PC. The AI processor 21 may learn the AI model using a program stored in the memory 25. In particular, the AI processor 21 may learn an AI model for performing a task of predicting a user's score using linear regression. For example, the AI processor 21 may be trained to predict a user's score using diagnostic test questions.
Meanwhile, the AI processor 21 for performing the functions as described above may be a general purpose processor (for example, a central processing unit (CPU)), but may be an AI dedicated processor (for example, a graphics processing unit (GPU)) for AI learning" (Applicant's specification, paras. [0046]-[0048]). There is no indication in the Specification that Applicant has achieved an advancement or improvement in a computer for recommending a diagnostic test question.

Dependent claims 2, 4-6, 8 and 10-12 inherit the deficiencies of their respective parent claims through their dependencies and do not recite additional limitations sufficient to direct the claims to more than the claimed abstract idea, and are thus rejected for the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C.
103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 7 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Pu et al. (US 2021/0390873 A1) in view of Zelenka et al. (US 2014/0188442 A1) and Zheng et al. (US 2021/0374279 A1).

Re claims 1, 7: Pu teaches 1. A method of recommending diagnostic test questions for user evaluation by an electronic device (Pu, Abstract) including an artificial intelligence (AI) processor, a diagnostic test selection unit and a memory storing an AI model for predicting a user's score and instructions executable by the AI processor (Pu, Abstract, "the trained machine learning knowledge tracing engine"; [0065], "set of one or more predictions based on the computation executed throughout the neural network, including a prediction of a input node corresponding with at least one test item, to a prediction of an output node corresponding with at least one skill mapping the test item, knowledge trace, user interaction with the test item, and/or learner profile"), the method comprising: generating, by the AI processor, a first matrix indicating whether users answer all questions correctly or incorrectly, wherein the first matrix includes labeled interactions and unlabeled interactions, and each of the labeled interactions has a value of 1 when a user of the users answers a question correctly or a value of 0 when the user of the users answers the question incorrectly (Pu, [0009], "mapping matrix W can be initialized … can be 0 or 1, representing whether the response is incorrect or correct"; [0081], "For this example, ci = 0 or 1, and can represent whether the response is incorrect or correct"); generating, by the diagnostic test selection unit, a
second matrix based on the first matrix using knowledge tracing (KT) (Pu, [0057], “knowledge tracing and/or test item suggestion”; [0081], “expert labeled question item-skill mapping, the mapping matrix W can be initialized using the expert assigned tags, labels, or metadata”; [0007] – [0008], “creating a skill embedding matrix S which represents the latent skills for each learner interaction. Each column of skill embedding matrix S can be a vector representation of one of the latent skills of the interaction xi … for each learner interaction xi in X, a respective question item qi is tagged with a skill tag array sj, and the interaction-skill mapping matrix W is initialized using the skill tag sj); selecting, by the diagnostic test selection unit, the diagnostic test questions based on the second matrix (Pu, [0057], “knowledge tracing and/or test item suggestion”; [0081], “expert labeled question item-skill mapping, the mapping matrix W can be initialized using the expert assigned tags, labels, or metadata”; [0007] – [0008], “creating a skill embedding matrix S which represents the latent skills for each learner interaction. Each column of skill embedding matrix S can be a vector representation of one of the latent skills of the interaction xi … for each learner interaction xi in X, a respective question item qi is tagged with a skill tag array sj, and the interaction-skill mapping matrix W is initialized using the skill tag sj); and training, by the AI processor, the AI model for predicting the user's score using linear regression by inputting the diagnostic test questions as input values to the AI model (Pu, [0075], “This matrix can be updated (thus is trainable). For question/skill mappings which are used once in a sequence, it is likely the system will retain the initialized matrix. 
For question-skill mappings which are repeated in an interaction sequence, the trainable model will likely update the matrix”; [0066], “The training may include updating the weight value to decrease the loss or error of the network”). 7. An electronic device for recommending diagnostic test questions for user evaluation, the electronic device (Pu, Abstract, “the trained machine learning knowledge tracing engine”; [0065], “set of one or more predictions based on the computation executed throughout the neural network, including a prediction of a input node corresponding with at least one test item, to a prediction of an output node corresponding with at least one skill mapping the test item, knowledge trace, user interaction with the test item, and/or learner profile”) comprising: a communication module configured to communicate with a terminal; a memory storing an artificial intelligence (AI) model for predicting a user's score and instructions; an artificial intelligence (AI) processor configured to execute the instructions; and a diagnostic test selection unit (Pu, FIG. 
1A; Abstract, "the trained machine learning knowledge tracing engine"; [0065], "set of one or more predictions based on the computation executed throughout the neural network, including a prediction of a input node corresponding with at least one test item, to a prediction of an output node corresponding with at least one skill mapping the test item, knowledge trace, user interaction with the test item, and/or learner profile"), wherein the AI processor generates, through the memory, a first matrix indicating whether users answer questions correctly or incorrectly, and wherein the first matrix includes labeled interactions and unlabeled interactions, and each of the labeled interactions has a value of 1 when a user of the users answers a question correctly or a value of 0 when the user of the users answers the question incorrectly (Pu, [0009], "mapping matrix W can be initialized … can be 0 or 1, representing whether the response is incorrect or correct"; [0081], "For this example, ci = 0 or 1, and can represent whether the response is incorrect or correct"), wherein the diagnostic test selection unit uses knowledge tracing (KT) to generate a second matrix (Pu, [0057], "knowledge tracing and/or test item suggestion"; [0081], "expert labeled question item-skill mapping, the mapping matrix W can be initialized using the expert assigned tags, labels, or metadata"; [0007] – [0008], "creating a skill embedding matrix S which represents the latent skills for each learner interaction.
Each column of skill embedding matrix S can be a vector representation of one of the latent skills of the interaction xi … for each learner interaction xi in X, a respective question item qi is tagged with a skill tag array sj, and the interaction-skill mapping matrix W is initialized using the skill tag sj"), and wherein the AI processor trains the AI model for predicting the user's score using linear regression by inputting the diagnostic test questions as input values to the AI model (Pu, [0075], "This matrix can be updated (thus is trainable). For question/skill mappings which are used once in a sequence, it is likely the system will retain the initialized matrix. For question-skill mappings which are repeated in an interaction sequence, the trainable model will likely update the matrix"; [0066], "The training may include updating the weight value to decrease the loss or error of the network").

Pu does not explicitly disclose selecting, by the diagnostic test selection unit, the diagnostic test questions using Lasso regression based on the second matrix (linear regression).

Zelenka et al. (US 2014/0188442 A1) teaches systems and methods that may automatically generate institution-specific, program-specific or course-specific student risk assessment models from an arbitrary set of potential risk predictors, where student data from previously completed courses are collected and used to create a design matrix of predictor values and an outcome vector (Zelenka, Abstract). Zelenka further teaches selecting, by the diagnostic test selection unit, the diagnostic test questions using Lasso regression based on the second matrix (Zelenka, [0006], "using automated predictor selection method, such as lasso logistic regression"; [0003], "the trainer uses the design matrix and the outcome vector to create a risk model or a set of mathematical equations to assess student risk. The trainer may use an automated feature selection technique to create the model.
Automated feature selection techniques include automated logistic regression techniques that include, but are not limited to, lasso logistic regression").

Therefore, in view of Zelenka, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method / system described in Pu by providing the Lasso selection as taught by Zelenka, since using an automated logistic regression technique provides the benefits of logistic regression without requiring manual fitting and tuning of the model for different institutions, programs, or courses. The benefits of logistic regression include producing risk estimating equations that are easy to deploy in the scorer and to incorporate into other software, providing explanations of the levels of risk, estimating the probability of the event of interest occurring, and estimating relatively quickly even on large data sets (Zelenka, [0033]).

Pu does not explicitly disclose that each of the unlabeled interactions has no real data; nor does Pu disclose generating, by the diagnostic test selection unit, a second matrix based on the first matrix using knowledge tracing (KT), wherein the generating of the second matrix comprises generating the second matrix by using the KT to simulate the unlabeled interactions in the first matrix such that each of the unlabeled interactions has a value of 1 indicating a corresponding question being answered correctly or a value of 0 indicating the corresponding question being answered incorrectly in the second matrix.

Zheng et al. (US 2021/0374279 A1) teaches that a device may generate a synthetic knowledge graph based on a true knowledge graph, may partition the synthetic knowledge graph into a set of synthetic data partitions, and may determine, using a plurality of teacher models, an aggregated prediction (Zheng, Abstract).
Zheng teaches generating, by the AI processor, a first matrix indicating whether users answer all questions correctly or incorrectly, wherein the first matrix includes labeled interactions and unlabeled interactions, and each of the labeled interactions has a value of 1 when a user of the users answers a question correctly or a value of 0 when the user of the users answers the question incorrectly, and each of the unlabeled interactions has no real data (Zheng, fig. 1A, "adjacency matrix"; [0028], "a knowledge graph such that each partition includes an unpartitioned adjacency matrix (e.g., from the original knowledge graph that is being partitioned) and includes only the attribute vectors for the nodes that are included in that partition. As a simple example, for a knowledge graph with four nodes (e.g., 1, 2, 3, and 4), attribute partitioning can be used to create a first partition that includes the entire adjacency matrix from the unpartitioned knowledge graph"; [0029]; [0033], "the training may include generation of a prediction for unlabeled data (or labeled data with the prediction being generated prior to analyzing the label, such as for data included in a test set), by a teacher model"; unpartitioned adjacency matrix – first matrix); generating, by the diagnostic test selection unit, a second matrix based on the first matrix using knowledge tracing (KT), wherein the generating of the second matrix comprises generating the second matrix by using the KT to simulate the unlabeled interactions in the first matrix such that each of the unlabeled interactions has a value of 1 indicating a corresponding question being answered correctly or a value of 0 indicating the corresponding question being answered incorrectly in the second matrix (Zheng, fig.
1A, "Synthetic Data Partition 1 - 3"; [0028] – [0029]; [0104], "a synthetic knowledge graph that includes a synthetic adjacency matrix and a synthetic attribute matrix"; synthetic adjacency matrix – second matrix; [0066], "the machine learning algorithm may include a regression algorithm (e.g., linear regression, logistic regression, and/or the like), which may include a regularized regression algorithm (e.g., Lasso regression, Ridge regression, Elastic-Net regression, and/or the like)"; [0033], "the training may include generation of a prediction for unlabeled data (or labeled data with the prediction being generated prior to analyzing the label, such as for data included in a test set), by a teacher model").

Therefore, in view of Zheng, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/device described in Pu by predicting for unlabeled data as taught by Zheng, since, by receiving a label for data and/or comparing a prediction to the label, a teacher model and/or a student model may determine values and/or positions of those values that are more or less indicative of true data (or synthetic data), and may update subsequent predictions accordingly (e.g., by learning from the labeled data) to reduce the classification error (Zheng, [0035]).

Likewise, in view of Zheng, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/device described in Pu by generating a synthetic adjacency matrix (a second matrix) as taught by Zheng, since attribute partitioning may lead to more accurate modeling (e.g., more accurate predictions and/or statistical analyses) and allows a synthetic data modeling system to generate a differentially private synthetic knowledge graph that permits accurate statistical analyses (Zheng, [0014]).

Re claims 4, 10:

4.
The method of claim 2, further comprising correcting the AI model for predicting the user's score by using a time value taken for the user to solve the questions as a weight. 10. The electronic device of claim 8, wherein the AI processor corrects the AI model for predicting the user's score by using a time value taken for the user to solve the questions as a weight (Pu, [0055], "the interaction result can also depend on the time spent answering the question during the interaction. For example, an interaction which was longer in time (or shorter in time) than an ideal interaction time yet had the correct answer, can have a different interaction result (e.g. a lower percentage score)"; [0120], "the time spent answering the question, whether the system correctly predicted if the learner would answer the question correctly, or not, if the submitted learner response was correct or not, etc. The system can apply NLP engine 108 and/or machine learning engine 109, to score the learner response"; [0128]).

Claims 2 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Pu et al. (US 2021/0390873 A1), Zelenka et al. (US 2014/0188442 A1) and Zheng et al. (US 2021/0374279 A1) as applied to claims 1 and 7 above, and further in view of Lan et al. (US 2015/0170536 A1).

Re claims 2, 8: Pu does not explicitly disclose a sparse matrix and a dense matrix. Lan teaches a mechanism for tracing variation of concept knowledge of learners over time and evaluating content organization of learning resources used by the learners (Lan, Abstract). Lan teaches 2. The method of claim 1, wherein the first matrix is a sparse matrix, and the second matrix is a dense matrix. 8. The electronic device of claim 7, wherein the first matrix is a sparse matrix, and the second matrix is a dense matrix (Lan, fig. 1, fig. 1B, Matrix Y – sparse matrix and Matrix R(t) – dense matrix).
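The Lasso-selection and linear-regression steps mapped above can be illustrated on synthetic data. The coordinate-descent Lasso below is a textbook sketch on invented data, not code from Pu, Zelenka, or the application:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dense "second matrix" (users x questions) and a per-user score to predict.
X = rng.integers(0, 2, size=(40, 10)).astype(float)
true_w = np.array([2.0, 0, 0, 1.5, 0, 0, 0, 0.5, 0, 0])
y = X @ true_w + rng.normal(0.0, 0.1, size=40)

# Center columns and target so the Lasso need not model an intercept.
Xc = X - X.mean(axis=0)
yc = y - y.mean()

def lasso_cd(X, y, lam, n_sweeps=200):
    """Textbook coordinate-descent Lasso: 0.5*||y - Xw||^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]        # residual without feature j
            rho = X[:, j] @ r
            z = X[:, j] @ X[:, j]
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return w

w = lasso_cd(Xc, yc, lam=8.0)
selected = np.flatnonzero(np.abs(w) > 1e-6)       # questions the Lasso keeps

# Ordinary least squares on the selected questions: the claimed
# "linear regression" scoring step, sketched via the normal equations.
coef, *_ = np.linalg.lstsq(Xc[:, selected], yc, rcond=None)
```

Questions whose weight survives the soft threshold play the role of the selected diagnostic test questions; the final least-squares fit stands in for the score-prediction model.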
Therefore, in view of Lan, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method / device described in Pu, by transforming a matrix as taught by Lan, in order to provide the learner activity summary matrices R(t) based on observed binary-valued (correct/incorrect) graded learner responses to questions matrix Y and on the learner activity matrices R (Lan, [0054]; [0068]).

Claims 5-6 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Pu et al. (US 2021/0390873 A1), Zelenka et al. (US 2014/0188442 A1) and Zheng et al. (US 2021/0374279 A1) as applied to claims 4 and 10 above, and further in view of Jackman (US 2018/0130368 A1).

Re claims 5-6 and 11-12: Pu does not explicitly disclose an average value of times taken to solve questions for each part of the questions; nor does Pu disclose that the time value is a ranking value of the user compared to an average value of other users based on the average value. Jackman teaches an invention related to computerized training, and particularly to systems and methods for adaptive training in problem solving (Jackman, Abstract). Jackman teaches 5. The method of claim 4, wherein the time value is an average value of times taken to solve questions for each part of the questions. 11. The electronic device of claim 10, wherein the time value is an average value of times taken to solve questions for each part of the questions. 6. The method of claim 5, further comprising obtaining a ranking value of the user compared to an average value of other users based on the average value of the times. 12.
The electronic device of claim 11, wherein the AI processor obtains a ranking value of the user compared to an average value of other users based on the average value of the times (Jackman, [0062], "first three entries, all of which had 100% correct answers, are ranked in order of the average time taken by users 22 to answer, from fastest to slowest"; [0078]; [0060], "server 28 selects questions from general database 36 according to the major solution approach that has the fastest average time, at an implementation question selection step 60"; [0009], "the ranking information is indicative of numbers of correct solutions and times taken by the users to enter the solutions by each of the multiple solution approaches").

Therefore, in view of Jackman, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method / system described in Pu so that the training problems to present to each individual user are selected based on both the general ranking data in the general database and the personal ranking data in the personal database that is assigned to the individual user (Jackman, [0011]), since the lower the effective time is, the better this tool is forecast to be for this specific user (Jackman, [0072]).

Response to Arguments

Applicant's arguments filed 11/24/2025 have been fully considered but they are not persuasive.
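The time-as-weight correction and time-based ranking addressed in the claims 4-6 and 10-12 rejections admit a simple numerical reading, sketched here with invented data: a weighted least-squares fit where a user's weight derives from how far their average solve time sits from the cohort average. This is one possible interpretation, not the claimed or cited implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Responses (users x questions) and the time each user spent per question.
answers = rng.integers(0, 2, size=(30, 5)).astype(float)
times = rng.uniform(10.0, 120.0, size=(30, 5))     # seconds, synthetic
scores = answers.sum(axis=1) + rng.normal(0.0, 0.2, 30)

# One reading of "time value as a weight": down-weight users whose average
# solve time is far from the cohort average.
avg_time = times.mean(axis=1)                      # per-user average
weights = 1.0 / (1.0 + np.abs(avg_time - avg_time.mean()) / avg_time.std())

# Weighted least squares: solve (X^T W X) beta = X^T W y.
W = np.diag(weights)
beta = np.linalg.solve(answers.T @ W @ answers, answers.T @ W @ scores)

# Ranking value: the user's position when ordered by average time (0 = fastest).
rank = np.argsort(np.argsort(avg_time))
```

Because the synthetic scores are just the row sums plus small noise, the weighted fit recovers coefficients near 1 for every question; the weighting scheme itself is an assumption, chosen only to show how a time value can enter the normal equations.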
Applicant argues that independent claim 1 recites the following features:

• generating, by the AI processor, a first matrix indicating whether users answer questions correctly or incorrectly, wherein the first matrix includes labeled interactions and unlabeled interactions, and each of the labeled interactions has a value of 1 when a user of the users answers a question correctly or a value of 0 when the user of the users answers the question incorrectly, and each of the unlabeled interactions has no real data
• generating, by the diagnostic test selection unit, a second matrix based on the first matrix using knowledge tracing (KT), wherein the generating of the second matrix comprises generating the second matrix by using the KT to simulate the unlabeled interactions in the first matrix such that each of the unlabeled interactions has a value of 1 indicating a corresponding question being answered correctly or a value of 0 indicating the corresponding question being answered incorrectly in the second matrix
• selecting, by the diagnostic test selection unit, the diagnostic test questions using Lasso regression based on the second matrix
• training, by the AI processor, the AI model for predicting the user's score using linear regression based on the diagnostic test questions

The examiner submits that "[m]achine learning" is a term of art, and one definition of it is "a branch of artificial intelligence in which a computer generates rules underlying or based on raw data that has been fed into it." In other words, the broadest reasonable interpretation of the term "machine learning" would describe computer software whereby the algorithm that the computer employs may change over time based on data that is fed into the algorithm. Furthermore, manipulation and transformation of matrices (a first and second matrix) has been known since before the invention of computers and AI.
None of the claimed steps requires any particular complexity in the amount of data or in the level of matrix transformation. A human can perform the steps and analysis in his/her mind, using pen and paper, even though it takes longer compared to a computer program. The Office has included a new reference, Zheng et al. (US 2021/0374279 A1), to teach the limitations: wherein the first matrix includes labeled interactions and unlabeled interactions … generating, by the diagnostic test selection unit, a second matrix based on the first matrix using knowledge tracing (KT) …

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK YIP, whose telephone number is (571) 270-5048. The examiner can normally be reached Monday through Friday, 9:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JACK YIP/Primary Examiner, Art Unit 3715

Prosecution Timeline

Oct 25, 2022 — Application Filed
Jul 21, 2025 — Non-Final Rejection — §101, §103
Nov 24, 2025 — Response Filed
Dec 16, 2025 — Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588859 — SYSTEM AND METHOD FOR INTERACTING WITH HUMAN BRAIN ACTIVITIES USING EEG-FNIRS NEUROFEEDBACK — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592160 — System and Method for Virtual Learning Environment — Granted Mar 31, 2026 (2y 5m to grant)
Patent 12558290 — BLOOD PRESSURE LOWERING TRAINING DEVICE — Granted Feb 24, 2026 (2y 5m to grant)
Patent 12525140 — SYSTEMS AND METHODS FOR PROGRAM TRANSMISSION — Granted Jan 13, 2026 (2y 5m to grant)
Patent 12512012 — SYSTEM FOR EVALUATING RADAR VECTORING APTITUDE — Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 33%
With Interview: 70% (+37.6%)
Median Time to Grant: 4y 1m
PTA Risk: Moderate
Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
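As the footnote states, the headline probabilities follow from simple arithmetic on the examiner's career data. A quick check, assuming the interview lift is additive in percentage points:

```python
# Career allow rate from this examiner's resolved cases.
granted, resolved = 229, 702
allow_rate = granted / resolved          # ~0.326, reported as 33%

# Interview lift reported as +37.6 percentage points (additive assumption).
with_interview = allow_rate + 0.376      # ~0.702, reported as 70%
```

Both rounded figures match the dashboard's 33% and 70%; how the tool actually combines interview data may differ from this additive reading.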
