DETAILED ACTION
This communication is in response to the Amendments and Arguments filed on 03/17/2026.
Claims 6 and 18 have been canceled by the Applicant.
Claims 1-5, 7-17, and 19-24 are pending and have been examined. This action has been made FINAL.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/17/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments and Amendments
Amendments to the claims by the Applicant have been considered and addressed below.
With respect to the Allowable Subject Matter and the 35 U.S.C. §§ 112, 101, 102, and 103 rejections, the Applicant provides several arguments, to which the Examiner responds below.
35 USC § 112(b) rejection(s)
Arguments in page 9 of the Remarks filed on 03/02/2026
Examiner’s Response to Arguments:
Applicant’s arguments and amendments with respect to the 35 U.S.C. § 112(b) rejection have been fully considered and are persuasive. The 35 U.S.C. § 112(b) rejection of independent claims 1 and 13 has been withdrawn.
35 USC § 101 rejection(s)
Arguments in pages 9-13 of the Remarks filed on 03/02/2026
Examiner’s Response to Arguments:
Applicant’s arguments with respect to the rejection of independent claims 1 and 13 under 35 U.S.C. § 101 have been fully considered but are not persuasive.
The Applicant argues that:
…A proper reading of claim 1 shows that it cannot be practically performed in the human mind. For example, claim 1 recites processes to orchestrate a plurality of machine learning models to process different aspects of an open activity response, and aggregate the predictions generated by the machine learning models. This cannot be practically performed in the human mind…
…The recited features reflect improvements to how machine learning models operate. The PTAB has held that such improvements integrate an alleged abstract idea into a practical application. For example, in Ex parte Desjardins, Appeal 2024-000567 (PTAB Nov. 4, 2025) (precedential), the Director held that the claims at issue reciting improvements in training a machine learning model integrate an abstract idea into a practical application and are thus patent-eligible. In reaching the conclusion, the Director reasoned: the Specification ... identifies improvements in training the machine learning model itself. ... the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." […] the present application recites features that reflect improvements to how machine learning models operate, and thus is patent-eligible…
…The features relate to orchestrating multiple machine learning models to process different aspects of an open activity response, and aggregating the predictions generated by the machine learning models. Using dedicated machine learning models to respectively process different aspects of the open activity response results in improved accuracy and quality of output predictions, because each machine learning model can be configured to specialize in a particular task… […] by providing dedicated machine learning models dedicated to particular types of assessments (e.g., grammar, vocabulary, content), the system may be easily updated to select and use one or more newly available machine learning models in place of previously used machine learning models…
However, the Examiner respectfully disagrees because:
A human is capable of performing actions associated with a preknown/predefined set of steps or rules (i.e., models), as will be discussed below.
The Examiner refers the Applicant to MPEP 2106.05(a):
“It is important to note that in order for a method claim to improve computer functionality, the broadest reasonable interpretation of the claim must be limited to computer implementation. That is, a claim whose entire scope can be performed mentally, cannot be said to improve computer technology. Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 120 USPQ2d 1473 (Fed. Cir. 2016) (a method of translating a logic circuit into a hardware component description of a logic circuit was found to be ineligible because the method did not employ a computer and a skilled artisan could perform all the steps mentally). Similarly, a claimed process covering embodiments that can be performed on a computer, as well as embodiments that can be practiced verbally or with a telephone, cannot improve computer technology. See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1328, 122 USPQ2d 1377, 1381 (Fed. Cir. 2017) (process for encoding/decoding facial data using image codes assigned to particular facial features held ineligible because the process did not require a computer).” (Emphasis added)
A human, or multiple humans, can perform said actions associated with a preknown/predefined set of steps or rules (i.e., models) as a mental process and/or certain methods of organizing human activity, as will be discussed in more detail below.
Please see the detailed analysis below for more details on how the Examiner has determined that the independent claims do not recite additional elements that integrate the judicial exception into a practical application, and hence do not qualify as patent-eligible subject matter under 35 U.S.C. § 101.
Please refer to MPEP 2106.04(1): Eligibility Step 2A: Whether a Claim is Directed to a Judicial Exception: Prong One.
“Prong One asks does the claim recite an abstract idea, law of nature, or natural phenomenon? In Prong One examiners evaluate whether the claim recites a judicial exception, i.e. whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. While the terms "set forth" and "described" are thus both equated with "recite", their different language is intended to indicate that there are two ways in which an exception can be recited in a claim. For instance, the claims in Diehr, 450 U.S. at 178 n. 2, 179 n.5, 191-92, 209 USPQ at 4-5 (1981), clearly stated a mathematical equation in the repetitively calculating step, and the claims in Mayo, 566 U.S. 66, 75-77, 101 USPQ2d 1961, 1967-68 (2012), clearly stated laws of nature in the wherein clause, such that the claims "set forth" an identifiable judicial exception. Alternatively, the claims in Alice Corp., 573 U.S. at 218, 110 USPQ2d at 1982, described the concept of intermediated settlement without ever explicitly using the words "intermediated" or "settlement."”
“An example of a claim that recites a judicial exception is "A machine comprising elements that operate in accordance with F=ma." This claim sets forth the principle that force equals mass times acceleration (F=ma) and therefore recites a law of nature exception. Because F=ma represents a mathematical formula, the claim could alternatively be considered as reciting an abstract idea. Because this claim recites a judicial exception, it requires further analysis in Prong Two in order to answer the Step 2A inquiry. An example of a claim that merely involves, or is based on, an exception is a claim to "A teeter-totter comprising an elongated member pivotably attached to a base member, having seats and handles attached at opposing sides of the elongated member." This claim is based on the concept of a lever pivoting on a fulcrum, which involves the natural principles of mechanical advantage and the law of the lever. However, this claim does not recite these natural principles and therefore is not directed to a judicial exception (Step 2A: NO). Thus, the claim is eligible at Pathway B without further analysis.”
From this analysis, in Step 2A, Prong One, the Examiner has evaluated the independent claims accordingly and determined that the amended independent claims as drafted indeed describe a judicial exception (i.e., an abstract idea), which represents a mental process (one that can be performed by a human with pen and paper).
More specifically, similar to what was discussed in the Non-Final Rejection mailed on 10/01/2025:
The limitations of independent claims 1 and 13, as drafted cover a human (mental process and/or certain methods of organizing human activity).
More specifically, the independent claim(s) recite(s):
1. A method for dynamic open activity response assessment, the method comprising:
receiving, by an electronic processor via a network, an open activity response from a client device of a user;
in response to the open activity response, providing, by the electronic processor, the open activity response to a plurality of machine learning models to process a plurality of open response assessments in real time, the plurality of machine learning models corresponding to the plurality of open response assessments, a first open response assessment of the plurality of open response assessments being agnostic with respect to a second open response assessment of the plurality of open response assessments;
receiving, by the electronic processor, a plurality of assessment scores from the plurality of machine learning models, the plurality of assessment scores corresponding to the plurality of open response assessments; and
providing, by the electronic processor, a plurality of assessment results to the client device of the user based on the plurality of assessment scores corresponding to the plurality of open response assessments associated with the open activity response; [[.]]
wherein a first assessment score of the plurality of assessment scores is received from a first machine learning model of the plurality of machine learning models, the first assessment score being indicative of a first confidence score about how close the open activity response is to a content objective,
wherein a second assessment score of the plurality of assessment scores is received from a second machine learning model of the plurality of machine learning models, the second assessment score being indicative of a second confidence score about how many words in the open activity response are close to a list of predetermined words,
wherein a third assessment score of the plurality of assessment scores is received from a third machine learning model of the plurality of machine learning models, the third assessment score being indicative of a third confidence score about how close a following sentence subsequent to a previous sentence in the open activity response is close to a predicted sentence, and
wherein a fourth assessment score of the plurality of assessment scores is received from a fourth machine learning model of the plurality of machine learning models, the fourth assessment score being indicative of a fourth confidence score about how a grammar structure of the open activity response is close to a grammar learning objective.
13. A system for dynamic open activity response assessment, comprising:
a memory; and
an electronic processor coupled with the memory,
wherein the processor is configured to:
[perform the limitations as in claim 1, above.]
This reads on a human (e.g., mentally and/or using pen and paper):
Receiving a response (e.g., verbally or written) from another human;
Analyzing/assessing said response by following multiple predefined sets of steps or rules (i.e., models);
Assigning a plurality of scores to the response after it has been analyzed by the plurality of predefined sets of steps or rules (i.e., models);
Writing down the scores/results for display to the other human;
Wherein a first assessment is performed following a first predefined set of steps/rules to get a score (e.g., determining how close to content objective is the content);
Wherein a second assessment is performed following a second predefined set of steps/rules to get a score (e.g., determining how close to the list of predetermined words are the words);
Wherein the discourse assessment is performed following a third predefined set of steps/rules to get a score (e.g., determining how close a sentence is to a predicted sentence);
Wherein the grammar assessment is performed following a fourth predefined set of steps/rules to get a score (e.g., determining how close the grammar is to grammar objective).
Please also refer to MPEP 2106.05(f)(2): Whether the claim invokes computers or other machinery merely as a tool to perform an existing process, and MPEP 2106.06(b): Clear Improvement to a Technology or to Computer Functionality.
Please refer to MPEP 2106.04(2): Eligibility Step 2A: Whether a Claim is Directed to a Judicial Exception: Prong Two.
“Prong Two asks does the claim recite additional elements that integrate the judicial exception into a practical application? In Prong Two, examiners evaluate whether the claim as a whole integrates the exception into a practical application of that exception. If the additional elements in the claim integrate the recited exception into a practical application of the exception, then the claim is not directed to the judicial exception (Step 2A: NO) and thus is eligible at Pathway B. This concludes the eligibility analysis. If, however, the additional elements do not integrate the exception into a practical application, then the claim is directed to the recited judicial exception (Step 2A: YES), and requires further analysis under Step 2B (where it may still be eligible if it amounts to an ‘‘inventive concept’’). For more information on how to evaluate whether a judicial exception is integrated into a practical application, see MPEP § 2106.04(d)(2).”
From this analysis, in Step 2A, Prong Two, the Examiner has evaluated the independent claims accordingly and determined that the amended independent claims, as drafted and as a whole, do not include additional elements that integrate the exception (i.e., an abstract idea) into a practical application of that exception. Similar to what was discussed in the Non-Final Rejection mailed on 10/01/2025:
This judicial exception is not integrated into a practical application because, for example: claim 1 recites “electronic processor”, “network”, “client device”, and “first/second/third/fourth machine learning models” while claim 13 further recites “memory” and “processor”. As an example, in ¶ [0039-0040] of the as-filed specification, it is disclosed that “In some examples, the computing system 200 may include one or more storage subsystems 210, including hardware and software components used for storing data and program instructions, such as system memory 218 and computer-readable storage media 216. In some examples, the system memory 218 and/or the computer-readable storage media 216 may store and/or include program instructions that are loadable and executable on the processor(s) 204. […] In some examples, the system memory 218 may be stored in volatile memory (e.g., random-access memory (RAM) 212, including static random-access memory (SRAM) or dynamic random-access memory (DRAM)). In an example, the RAM 212 may contain data and/or program modules that are immediately accessible to and/or operated and executed by the processing circuitry 204…” Therefore, a general-purpose computer or computing device is described and mainly used as an application thereof. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Please also refer to MPEP 2106.05(f)(2): Whether the claim invokes computers or other machinery merely as a tool to perform an existing process.
Finally, please refer to MPEP 2106.05(A): Relevant Considerations For Evaluating Whether Additional Elements Amount To An Inventive Concept
“Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include:
i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 573 U.S. at 225-26, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));
ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));”
From this analysis, in Step 2B, the Examiner has evaluated the independent claims accordingly and determined that the independent claims as drafted have limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception. Similar to what was discussed in the Non-Final Rejection mailed on 10/01/2025:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer amounts to no more than a general-purpose computing device, as noted. The claims are not patent eligible.
In summary, the Examiner respectfully disagrees with the arguments above.
For more details, please refer to updated 35 U.S.C. § 101 rejections for claims 1 and 13, below.
35 USC § 102/103 rejection(s)
Arguments in pages 13-14 of the Remarks filed on 03/02/2026
Examiner’s Response to Arguments:
Applicant’s arguments and amendments with respect to the 35 U.S.C. §§ 102/103 rejections have been fully considered and are persuasive. The 35 U.S.C. §§ 102/103 rejections of independent claims 1 and 13 have been withdrawn.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 7-17, and 19-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more; more specifically, it is directed to the abstract idea groupings of a mental process and/or certain methods of organizing human activity.
The independent claim(s) recite(s):
1. A method for dynamic open activity response assessment, the method comprising:
receiving, by an electronic processor via a network, an open activity response from a client device of a user;
in response to the open activity response, providing, by the electronic processor, the open activity response to a plurality of machine learning models to process a plurality of open response assessments in real time, the plurality of machine learning models corresponding to the plurality of open response assessments, a first open response assessment of the plurality of open response assessments being agnostic with respect to a second open response assessment of the plurality of open response assessments;
receiving, by the electronic processor, a plurality of assessment scores from the plurality of machine learning models, the plurality of assessment scores corresponding to the plurality of open response assessments; and
providing, by the electronic processor, a plurality of assessment results to the client device of the user based on the plurality of assessment scores corresponding to the plurality of open response assessments associated with the open activity response; [[.]]
wherein a first assessment score of the plurality of assessment scores is received from a first machine learning model of the plurality of machine learning models, the first assessment score being indicative of a first confidence score about how close the open activity response is to a content objective,
wherein a second assessment score of the plurality of assessment scores is received from a second machine learning model of the plurality of machine learning models, the second assessment score being indicative of a second confidence score about how many words in the open activity response are close to a list of predetermined words,
wherein a third assessment score of the plurality of assessment scores is received from a third machine learning model of the plurality of machine learning models, the third assessment score being indicative of a third confidence score about how close a following sentence subsequent to a previous sentence in the open activity response is close to a predicted sentence, and
wherein a fourth assessment score of the plurality of assessment scores is received from a fourth machine learning model of the plurality of machine learning models, the fourth assessment score being indicative of a fourth confidence score about how a grammar structure of the open activity response is close to a grammar learning objective.
13. A system for dynamic open activity response assessment, comprising:
a memory; and
an electronic processor coupled with the memory,
wherein the processor is configured to:
[perform the limitations as in claim 1, above.]
This reads on a human (e.g., mentally and/or using pen and paper):
Receiving a response (e.g., verbally or written) from another human;
Analyzing/assessing said response by following multiple predefined sets of steps or rules (i.e., models);
Assigning a plurality of scores to the response after it has been analyzed by the plurality of predefined sets of steps or rules (i.e., models);
Writing down the scores/results for display to the other human;
Wherein a first assessment is performed following a first predefined set of steps/rules to get a score (e.g., determining how close to content objective is the content);
Wherein a second assessment is performed following a second predefined set of steps/rules to get a score (e.g., determining how close to the list of predetermined words are the words);
Wherein the discourse assessment is performed following a third predefined set of steps/rules to get a score (e.g., determining how close a sentence is to a predicted sentence);
Wherein the grammar assessment is performed following a fourth predefined set of steps/rules to get a score (e.g., determining how close the grammar is to grammar objective).
This judicial exception is not integrated into a practical application because, for example: claim 1 recites “electronic processor”, “network”, “client device”, and “first/second/third/fourth machine learning models” while claim 13 further recites “memory” and “processor”. As an example, in ¶ [0039-0040] of the as-filed specification, it is disclosed that “In some examples, the computing system 200 may include one or more storage subsystems 210, including hardware and software components used for storing data and program instructions, such as system memory 218 and computer-readable storage media 216. In some examples, the system memory 218 and/or the computer-readable storage media 216 may store and/or include program instructions that are loadable and executable on the processor(s) 204. […] In some examples, the system memory 218 may be stored in volatile memory (e.g., random-access memory (RAM) 212, including static random-access memory (SRAM) or dynamic random-access memory (DRAM)). In an example, the RAM 212 may contain data and/or program modules that are immediately accessible to and/or operated and executed by the processing circuitry 204…” Therefore, a general-purpose computer or computing device is described and mainly used as an application thereof. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer amounts to no more than a general-purpose computing device, as noted. The claims are not patent eligible.
With respect to claims 2 and 14, the claim(s) recite:
2/14. The method/system of claims 1/13, wherein the open activity response comprises a written response.
This reads on a human (e.g., mentally and/or using pen and paper):
The received response is a written response from another human.
No additional limitations are present.
With respect to claims 3 and 15, the claim(s) recite:
3/15. The method/system of claims 2/14, wherein the plurality of open response assessments comprises at least one of: a content assessment, a vocabulary assessment, a discourse assessment, a grammar assessment, or a speaking assessment [only claim 3 – speaking assessment alternative not present in claim 15].
This reads on a human (e.g., mentally and/or using pen and paper):
Analyzing/assessing the received response by following multiple predefined sets of steps or rules (i.e., models) comprises at least assessing for: content, vocabulary, discourse, grammar, or speech.
No additional limitations are present.
With respect to claims 4 and 16, the claim(s) recite:
4/16. The method/system of claims 3/15, wherein the content assessment is configured to be processed based on a first machine learning model of the plurality of machine learning models,
wherein the vocabulary assessment is configured to be processed based on a second machine learning model of the plurality of machine learning models,
wherein the discourse assessment is configured to be processed based on a third machine learning model of the plurality of machine learning models, and
wherein the grammar assessment is configured to be processed based on a fourth machine learning model of the plurality of machine learning models.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein the content assessment is performed following a first predefined set of steps/rules;
Wherein the vocabulary assessment is performed following a second predefined set of steps/rules;
Wherein the discourse assessment is performed following a third predefined set of steps/rules;
Wherein the grammar assessment is performed following a fourth predefined set of steps/rules.
No additional limitations are present.
With respect to claims 5 and 17, the claim(s) recite:
5/17. The method/system of claims 4/16, wherein the first machine learning model comprises a neural network-based language model,
wherein the second learning model comprises a classifier model,
wherein the third learning model comprises a transformer model, and
wherein the fourth machine learning model comprises a dependency matcher model.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein the first predefined set of steps/rules comprises a first set of steps/rules (e.g., pre-known mathematical steps/rules);
Wherein the second predefined set of steps/rules comprises a second set of steps/rules (e.g., pre-known mathematical steps/rules);
Wherein the third predefined set of steps/rules comprises a third set of steps/rules (e.g., pre-known mathematical steps/rules);
Wherein the fourth predefined set of steps/rules comprises a fourth set of steps/rules (e.g., pre-known mathematical steps/rules).
No additional limitations are present.
With respect to claims 7 and 19, the claim(s) recite:
7/19. The method/system of claims 6/18, further comprising:
in response to the open activity response, providing a plurality of metadata of the open activity response to the plurality of machine learning models, the plurality of metadata corresponding to the plurality of machine learning models,
wherein a first metadata of the plurality of metadata for the first machine learning model comprises the content objective,
wherein a second metadata of the plurality of metadata for the second machine learning model comprises the list of the predetermined words,
wherein a third metadata of the plurality of metadata for the third machine learning model comprises the predicted sentence, and
wherein a fourth metadata of the plurality of metadata for the fourth machine learning model comprises the grammar learning objective.
This reads on a human (e.g., mentally and/or using pen and paper):
Using a pre-known plurality of data along with the plurality of predefined sets of steps/rules,
wherein first data comprises content objective;
wherein second data comprises predetermined words;
wherein third data comprises predicted sentences;
wherein fourth data comprises grammar learning objectives.
No additional limitations are present.
With respect to claims 8 and 20, the claim(s) recite:
8/20. The method/system of claims 2/14, wherein the open activity response further comprises a spoken response, and
wherein the written response is a transcribed response of the spoken response.
This reads on a human (e.g., mentally and/or using pen and paper):
The received response is a spoken response from another human along with the written response (i.e., transcription of spoken response).
No additional limitations are present.
With respect to claims 9 and 21, the claim(s) recite:
9/21. The method/system of claims 8/20, wherein the plurality of open response assessments further comprises a speaking assessment configured to be processed based on a fifth machine learning model of the plurality of machine learning models.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein the speech assessment is performed following a fifth predefined set of steps/rules.
No additional limitations are present.
With respect to claims 10 and 22, the claim(s) recite:
10/22. The method/system of claims 9/21, wherein a fifth assessment score is received from a fifth machine learning model of the plurality of machine learning models, the fifth assessment score being indicative of a fifth confidence score about how a pronunciation and fluency of the open activity response is close to a speaking objective.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein the speech assessment is performed following a fifth predefined set of steps/rules to get a score (e.g., determining how close the pronunciation/fluency is to speaking objective).
No additional limitations are present.
With respect to claims 11 and 23, the claim(s) recite:
11/23. The method/system of claims 1/13, wherein the open activity response is produced during a conversation between an agent and the user.
This reads on a human (e.g., mentally and/or using pen and paper):
Wherein the conversation is performed between two humans.
No additional limitations are present.
With respect to claims 12 and 24, the claim(s) recite:
12/24. The method/system of claims 11/23, wherein the agent comprises a conversational computing agent comprising a program designed to process the conversation with the user.
This reads on a human (e.g., mentally and/or using pen and paper):
Analyzing a conversation with another human.
There is an additional limitation of a “conversational computing agent” present. A similar analysis as provided for the independent claims above still applies.
Allowable Subject Matter
Claims 1-5, 7-17, and 19-24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if rewritten to overcome the rejections set forth in this Office Action (i.e., under 35 U.S.C. § 101).
The following is a statement of reasons for the indication of allowable subject matter:
Regarding independent claims 1 and 13, the closest prior art of record, Noble et al. (US 2020/0410024 A1), teaches all of the limitations as mapped in the independent claims, above. More specifically, Noble et al. teaches “…automatically generating personalized content that is aimed at enabling a user (such as a student) to prepare for a challenge—such as an exam, assessment, final, or evaluation...” as disclosed in ¶ [0038], as well as using a plurality of models for assessments as disclosed in ¶ [0094].
However, none of the cited prior art references, alone or in combination, discloses the claim limitations as drafted.
1/13. (Currently Amended) A method/system for dynamic open activity response assessment, the method comprising:
receiving, by an electronic processor via a network, an open activity response from a client device of a user;
in response to the open activity response, providing, by the electronic processor, the open activity response to a plurality of machine learning models to process a plurality of open response assessments in real time, the plurality of machine learning models corresponding to the plurality of open response assessments, a first open response assessment of the plurality of open response assessments being agnostic with respect to a second open response assessment of the plurality of open response assessments;
receiving, by the electronic processor, a plurality of assessment scores from the plurality of machine learning models, the plurality of assessment scores corresponding to the plurality of open response assessments; and
providing, by the electronic processor, a plurality of assessment results to the client device of the user based on the plurality of assessment scores corresponding to the plurality of open response assessments associated with the open activity response [[.]],
wherein a first assessment score of the plurality of assessment scores is received from a first machine learning model of the plurality of machine learning models, the first assessment score being indicative of a first confidence score about how close the open activity response is to a content objective,
wherein a second assessment score of the plurality of assessment scores is received from a second machine learning model of the plurality of machine learning models, the second assessment score being indicative of a second confidence score about how many words in the open activity response are close to a list of predetermined words,
wherein a third assessment score of the plurality of assessment scores is received from a third machine learning model of the plurality of machine learning models, the third assessment score being indicative of a third confidence score about how close a following sentence subsequent to a previous sentence in the open activity response is close to a predicted sentence, and
wherein a fourth assessment score of the plurality of assessment scores is received from a fourth machine learning model of the plurality of machine learning models, the fourth assessment score being indicative of a fourth confidence score about how a grammar structure of the open activity response is close to a grammar learning objective.
Hence, dependent claims 2-5, 7-12, 14-17, and 19-24 would also be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, as well as if rewritten to overcome any rejections as set forth in this Office Action (i.e., 35 USC § 101).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Keisha Y Castillo-Torres whose telephone number is (571)272-3975. The examiner can normally be reached Monday - Friday, 9:00 am - 4:00 pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir can be reached at (571)272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Keisha Y. Castillo-Torres
Examiner
Art Unit 2659
/Keisha Y. Castillo-Torres/Examiner, Art Unit 2659
/PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659