DETAILED ACTION
1. This Office action is in response to the submission filed on 11/26/2025 in Application No. 18778094. Claims 1-20 are presented for examination and are currently pending. Applicant’s arguments have been carefully and respectfully considered.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examination Under 37 CFR 1.114
3. A request for continued examination under 37 CFR 1.114, including the fee set
forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this
application is eligible for continued examination under 37 CFR 1.114, and the fee set
forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action
has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed on
11/26/2025 has been entered.
Response to Arguments
4. The claim amendment of 11/26/2025 has overcome the 112(b) rejection of 08/26/2025; as a result, the 112(b) rejection has been withdrawn.
The Applicant’s arguments on pages 7-8 of the remarks have been considered but are moot because Bruni et al. (US11538351, filed 01/14/2019), a new secondary reference, has been applied to the newly added limitations “wherein the first representation dataset comprises a plurality of evaluation metrics which includes at least a quantitative measurement used to assess a cognitive function … wherein an excitation element comprises a system recommendation comprising at least a neurocognitive exercise to increase performance in a cognitive or behavioral domain and a recommendation in a work or career domain” of claim 1.
On page 9 of the remarks, the Applicant argued that “Claim 11 as amended recites similar limitations to claim 1. As noted above, claim 1 is patentably distinguishable over Jain, Chen and Argast, alone or in combination, for at least the reasons discussed above. Accordingly, Applicant respectfully submits that claim 11 as amended is patentably distinguishable over Jain, Chen and Argast, alone or in combination, for at least the reasons discussed above. Therefore, Applicant respectfully requests the withdrawal of this rejection”.
It is noted that claim 11 is similar to claim 1; accordingly, the same reasoning applies to claim 11.
On page 9 of the remarks, the Applicant argued that “Each of claims 2-3, 6, 8, 12-13, 16 and 18 depends, directly or indirectly, from claim 1 or 11. As noted above, claims 1 and 11 are patentably distinguishable over Jain, Chen and Argast, alone or in combination, for at least the reasons discussed above. Accordingly, Applicant respectfully submits that claims 2-3, 6, 8, 12-13, 16 and 18 are patentably distinguishable over Jain, Chen and Argast, alone or in combination, for at least the reasons discussed above. Therefore, Applicant respectfully requests the withdrawal of these rejections”.
It is noted that dependent claims 2-3, 6, 8, 12-13, 16 and 18, which depend directly or indirectly from claim 1 or 11, are not allowable for reasons similar to those discussed above regarding claim 1.
On page 9 of the remarks, the Applicant argued that “Each of claims 4, 5, 7, 9, 10, 14, 15, 17, 19 and 20 depends, directly or indirectly, from claim 1 or 11. As noted above, claims 1 and 11 are patentably distinguishable over Jain, Chen and Argast, alone or in combination, for at least the reasons discussed above. Accordingly, Applicant respectfully submits that claims 4, 5, 7, 9, 10, 14, 15, 17, 19 and 20 are patentably distinguishable over Jain, Chen and Argast, alone or in combination, for at least the reasons discussed above”.
It is noted that dependent claims 4, 5, 7, 9, 10, 14, 15, 17, 19 and 20, which depend directly or indirectly from claim 1 or 11, are not allowable for reasons similar to those discussed above regarding claim 1.
On pages 9-10 of the remarks, the Applicant argued that “Kelly fails to cure the deficiencies of Jain, Chen and Argast. The Office has not asserted that Kelly teaches, suggests, or motivates “a plurality of evaluation metrics which includes at least a quantitative measurement used to assess a cognitive function,” and “wherein an excitation element comprises a system recommendation comprising at least a neurocognitive exercise to increase performance in a cognitive or behavioral domain and a recommendation in a work or career domain,” as in claim 1. Applicant respectfully submits that Kelly does not teach, suggest, or motivate “a plurality of evaluation metrics which includes at least a quantitative measurement used to assess a cognitive function,” and “wherein an excitation element comprises a system recommendation comprising at least a neurocognitive exercise to increase performance in a cognitive or behavioral domain and a recommendation in a work or career domain,” as in claim 1. Accordingly, Applicant respectfully submits that claims 4, 5, 7, 9, 10, 14, 15, 17, 19 and 20 are patentably distinguishable over Jain, Chen, Kelly and Argast, alone or in combination, for at least the reasons discussed above. Therefore, Applicant respectfully requests the withdrawal of these rejections”.
It is noted that Bruni et al. (US11538351, filed 01/14/2019), a new secondary reference, has been applied to the newly added limitations “wherein the first representation dataset comprises a plurality of evaluation metrics which includes at least a quantitative measurement used to assess a cognitive function … wherein an excitation element comprises a system recommendation comprising at least a neurocognitive exercise to increase performance in a cognitive or behavioral domain and a recommendation in a work or career domain” of claim 1.
Furthermore, Jain as modified by Chen, Kelly and Bruni now teaches the limitations of claims 4, 5, 7, 9, 10, 14, 15, 17, 19 and 20.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1-3, 6, 8, 11-13, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jain et al. (US12111747, filed 05/10/2024) in view of Chen et al. (US20250077792, filed 08/31/2023) and further in view of Bruni et al. (US11538351, filed 01/14/2019).
Regarding claim 1, Jain teaches an apparatus (FIG. 2 shows a block diagram showing some of the components typically incorporated in at least some of the computer systems and other devices on which the disclosed system operates in accordance with some implementations of the present technology, col. 1, ln 50-54) for training an excitation model (For example, an LLM is configured or trained using reinforcement learning from human feedback (RLHF), instruction tuning (col. 21, ln 51-54); For example, the data generation platform can redirect a prompt to a second LLM (e.g., distinct from the first LLM) (col. 29, ln 18-20); Based on particular model architectures and training data used to generate or tune LLMs (col. 2, ln 58-59). The Examiner notes the second LLM is an excitation model),
wherein the apparatus comprises: at least a processor; memory communicatively connected to the at least a processor (processor(s) 208, including a CPU for executing computer programs, a GPU for executing computer graphic programs and handling computing graphical elements; storage(s) 210, including at least one computer memory for storing programs (e.g., application(s) 212, model (s) 214, and other programs) and data while they are being used, col. 12, ln 14-20),
wherein the memory contains instructions configuring the at least a processor to: instantiate a representation generator (generation instances associated with the system, col. 30, ln 4-5);
collect a first dataset from a system (For example, the data generation platform 102 obtains, from a first database, a plurality of training prompts and respective
performance metric values associated with providing respective training prompts to the first LLM, (col. 36, ln 21-24); For example, the data generation platform 102 receives inputs such as unstructured data, including text data, such as a prompt (col. 9, ln 37-40); An event database can include data associated with events
relating to the data generation platform 102, col. 9, ln 10-11);
generate, using the representation generator and the first dataset, a first representation dataset (The data generation platform 102 can generate a first vector
representation for the expected test output (col. 49, ln 31-32); the data generation platform can provide the prompt to the selected model (e.g., LLM) for generation of the requested output (col. 6, ln 7-9); output generation request with a first performance criterion associated with the first LLM of a plurality of LLMs (col. 26, ln 8-9); machine learning models (e.g., LLMs), col. 31, ln 8. The Examiner notes the output is the generated first representation dataset and the first LLM is a representation generator which is a machine learning model),
wherein the first representation dataset comprises a plurality of evaluation metrics (The data generation platform can determine estimated performance metric values associated with generating the output, col. 6, ln 14-16);
output one or more excitation elements from the excitation model (the data generation platform can determine another model (e.g., a second LLM) for generation of the output, col. 6, ln 12-14. The Examiner notes the second LLM is the excitation model and the output generated is an excitation element) using the first representation dataset (For example, the data generation platform can redirect a prompt to a second LLM (col. 29, ln 18-20); natural language output e.g., prompts …, col. 9, ln 13-14),
transmit the excitation element to the system (the deployment database can include a server system (e.g., physical or virtual) that stores validated outputs or results from one or more LLMs (col. 10, ln 7-9); In response to validating the generated output, the data generation platform can transmit this information to an associated data store or deployment system, col. 4, ln 46-49);
collect a second dataset from the system (For example, the data generation platform 102 obtains, from a first database (col. 36, ln 21-22); A deployment database can include data associated with deploying, using, or viewing results associated with the data generation platform 102, col. 10, ln 5-7);
generate an error signal (The data generation platform 102 can provide the indication of the validation error, col. 51, ln 31-33) as a function of the second dataset and the first representation dataset (For example, the data generation platform 102 can generate a second code sample using a generative model that cures determined validation errors (e.g., by including an indication of the appropriate validation errors within a prompt of the LLM), col. 45, ln 66-67 to col. 46, ln 1-4);
modify the representation generator using the error signal (Based on such a determination, the data generation platform 102 can transmit the validation indicator 1222 (e.g., including indications of associated validation errors) to an LLM (e.g., the first model) for modification (col. 45, ln 62-66); For example, the access control engine 114 modifies or changes the LLM for execution of the prompt associated with the output generation request based on the user identifier, the attribute, and/or the performance evaluation 408, col. 16, ln 43-47),
wherein modifying the representation generator is configured to generate at least a modified evaluation metric (In some implementations, the data generation platform 102 can generate the modified output by providing an indication of a validation error (e.g., associated with the validation indicator) to an LLM (col. 51, ln 23-26); The data generation platform can determine estimated performance metric values associated with generating the output, col. 6, ln 14-16) comprising
a reprioritization of the one or more excitation elements (Accordingly, the data
generation platform 102 enables the prioritization of relevant performance metrics (e.g., cost) over other metrics (e.g., memory usage) according to system requirements (col. 31, ln 43-46); the data generation platform 102 can determine the composite performance metric value based on weights that correspond to
the order of priority (col. 21, ln 24-27); modify the output (e.g., by resubmitting the
output to the LLM) to modify the sentiment associated with the output, col. 27, ln 5-7);
output a second representation dataset (The data generation platform 102 can generate … a second vector representation for the test output (col. 49, ln 31-34); The LLM can generate an output based on the query and the retrieved documents, col. 21, ln 66-67 to col. 22, ln 1) using the modified representation generator (In some implementations, the model (e.g., LLM) includes augmented or modified LLMs, such as retrieval-augmented generation (RAG) algorithms, col. 21, ln 57-59); and
output a second excitation element from the excitation model (the process 1100 can generate a second output by providing the prompt to the second model (col. 37, ln 54-55); the platform can generate the output using the second LLM and transmit the output to a computing system, col. 6 ln 27-30) using the second representation dataset (The data generation platform 102 can generate … a second vector representation for the test output (col. 49 ln 31-34)).
Jain does not explicitly teach wherein the first representation dataset comprises a plurality of evaluation metrics which includes at least a quantitative measurement used to assess a cognitive function; wherein an excitation element comprises a system recommendation comprising at least a neurocognitive exercise to increase performance in a cognitive or behavioral domain and a recommendation in a work or career domain; tune the representation generator iteratively using the error signal to correct a deficiency of the system;
Chen teaches tune the representation generator iteratively using the error signal to correct a deficiency of the system (When fine-tuning the pretrained machine learning model 408 using the adaptation component 420, during each training iteration (or fine-tuning iteration), the set of low-rank matrices of the adaptation component 420 are updated based on the error signal 412 determined from the predicted output 406 and the pseudo label 418 [0074]);
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Jain to incorporate the teachings of Chen for the benefit of fine-tuning a machine learning model (Chen [0011]) to perform one or more domain-specific tasks (Chen [0025]).
Modified Jain does not explicitly teach wherein the first representation dataset comprises a plurality of evaluation metrics which includes at least a quantitative measurement used to assess a cognitive function; wherein an excitation element comprises a system recommendation comprising at least a neurocognitive exercise to increase performance in a cognitive or behavioral domain and a recommendation in a work or career domain;
Bruni teaches a first representation dataset (Referring to FIGS. 1 and 2, the cognitive assessment input system (CAIS) 120 generally comprises … an interaction data acquisition module 140, a cognitive indicators and work pattern analysis module 160, col. 10, lines 44-48),
wherein the first representation dataset comprises a plurality of evaluation metrics (Over time, and through observational learning studies with self-report or other indicators (e.g., physiological metrics) of workload, fatigue, and attentional focus, the way these interface signals are used in cognitive calculations can be revised to more accurately indicate cognitive levels for specific users or user roles, col. 10, lines 38-43)
which includes at least a quantitative measurement used to assess a cognitive function (By accurately and quantitatively measuring the cognitive state of the human, col. 3, lines 39-41);
output one or more excitation elements (The work pattern identification module 466 generally identifies a work pattern (short-term, actions-based behavior) based on data including relevant, formatted interaction data (col. 13, lines 26-29). The Examiner notes the output is work pattern (short-term, actions-based behavior)) from the excitation model using the first representation dataset (As shown, a machine learning module at 469 may be included to take data gained from modules such as the cognitive indicators computation module, the cognitive indicators comparison module, the work pattern identification module, the work pattern comparison module, col. 15, lines 15-19),
wherein an excitation element comprises a system recommendation comprising at least a neurocognitive exercise to increase performance in a cognitive or behavioral domain (A particularly helpful parameter to learn is attentional focus. This may be measured by dwell time (col. 15, lines 53-54); With these example implementations, higher calculated values will still indicate increased attentional focus, but these measurements will be normalized to accurately describe individual users, col. 16, lines 8-11) and a recommendation in a work or career domain (In this way, dwell time may indicate attentional focus; increased dwell time may suggest increased levels of focus, col. 10, lines 20-23);
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Jain to incorporate the teachings of Bruni for the benefit of using machine learning to accurately estimate an individual's attention span (Bruni, col. 16, lines 1-4).
Regarding claim 2, Modified Jain teaches the apparatus of claim 1. Jain teaches wherein the representation generator comprises a machine learning model (the data generation platform can provide the prompt to the selected model (e.g., LLM) for generation of the requested output (col. 6, ln 7-9); output generation request with a first performance criterion associated with the first LLM of a plurality of LLMs (col. 26, ln 8-9); machine learning models (e.g., LLMs), col. 31, ln 8. The Examiner notes the first LLM is a representation generator which is a machine learning model).
Regarding claim 3, Modified Jain teaches the apparatus of claim 2. Jain teaches wherein the machine learning model is configured to: identify a focus area for the system to optimize (In some implementations, the data generation platform 102 can receive user input that selects particular performance metrics (e.g., in an order of priority) for determination of corresponding values. For example, the data generation platform 102 can determine the composite performance metric value based on weights that correspond to the order of priority, col. 21, ln 21-27);
prioritize the focus area (Accordingly, the data generation platform 102 enables the prioritization of relevant performance metrics (e.g., cost) over other metrics (e.g., memory usage) according to system requirements, col. 31, ln 43-46); and
generate the representation dataset that further examines the focus area (modify the output (e.g., by resubmitting the output to the LLM) to modify the sentiment associated with the output, col. 27, ln 5-7; For example, the data generation platform 102 can determine the composite performance metric value based on weights that correspond to the order of priority, col. 21, ln 21-27).
Regarding claim 6, Modified Jain teaches the apparatus of claim 1. Jain teaches wherein the excitation model comprises a large language model (For example, the data generation platform can redirect a prompt to a second LLM (e.g., distinct from the first LLM) (col. 29, ln 18-20)).
Regarding claim 8, Modified Jain teaches the apparatus of claim 1. Jain teaches wherein the excitation model further comprises a neural network (An LLM can include an artificial neural network, col. 21, ln 48-49).
Regarding claim 11, claim 11 is similar to claim 1. It is rejected in the same manner and for the same reasons.
Regarding claim 12, claim 12 is similar to claim 2. It is rejected in the same manner and for the same reasons.
Regarding claim 13, claim 13 is similar to claim 3. It is rejected in the same manner and for the same reasons.
Regarding claim 16, claim 16 is similar to claim 6. It is rejected in the same manner and for the same reasons.
Regarding claim 18, claim 18 is similar to claim 8. It is rejected in the same manner and for the same reasons.
6. Claims 4, 5, 7, 9, 10, 14, 15, 17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jain et al. (US12111747, filed 05/10/2024) in view of Chen et al. (US20250077792, filed 08/31/2023) in view of Kelly et al. (US20240289560, filed 02/27/2024) and further in view of Bruni et al. (US11538351, filed 01/14/2019).
Regarding claim 4, Modified Jain teaches the apparatus of claim 2. Kelly teaches wherein the machine learning model is iteratively trained on a plurality of datasets as a function of the representation dataset (In some examples, the machine learning classifier model 410 may be trained, using one or more supervisory training techniques (e.g., backpropagation of errors, etc.) to assign a contextual classification based on a plurality of historical contextual classifications respectively assigned to a plurality of historical text documents [0119]. The Examiner notes that backpropagation involves iteratively tuning the model's internal parameters to minimize prediction error).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Jain, Chen and Bruni to incorporate the teachings of Kelly for the benefit of a Large Language Model (LLM) that may be iteratively retrained to continuously and automatically improve without a human in the loop (Kelly [0091]).
Regarding claim 5, Modified Jain teaches the apparatus of claim 1. Kelly teaches wherein collecting the first dataset comprises receiving information from a generative data model (In some embodiments, an initial document subset 402 is identified, from a document data store 404, and for a generative text request 406 [0098]; For example, a generative text request may be generated and/or provided to a backend service, such as a generative service [0148]; In some examples, the generative service 606 may include a third-party service, such as OpenAI [0158]. The Examiner notes the initial document subset 402, as the first dataset, receives the generative text request 406 (Fig. 4) from a generative data model such as OpenAI).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Jain, as modified by Chen and Bruni, to incorporate the teachings of Kelly for the benefit of a Large Language Model (LLM) that may be iteratively retrained to continuously and automatically improve without a human in the loop (Kelly [0091]).
Regarding claim 7, Modified Jain teaches the apparatus of claim 6. Kelly teaches wherein the large language model comprises a generative pretrained transformer (The LLM 420 may include any type of LLM, such as a generative pre-trained transformer, and/or the like [0133]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Jain, as modified by Chen and Bruni, to incorporate the teachings of Kelly for the benefit of a Large Language Model (LLM) that may be iteratively retrained to continuously and automatically improve without a human in the loop (Kelly [0091]).
Regarding claim 9, Modified Jain teaches the apparatus of claim 1. Kelly teaches wherein the excitation element is presented to the system through a graphical user interface, wherein the graphical user interface is configured to display a data structure to the system using a display device (For instance, the generative text output, using some of the techniques of the present disclosure, may trigger the performance of actions at a client device, such as the display, transmission, and/or the like of data reflective of generative text [0182]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Jain, as modified by Chen and Bruni, to incorporate the teachings of Kelly for the benefit of a Large Language Model (LLM) that may be iteratively retrained to continuously and automatically improve without a human in the loop (Kelly [0091]).
Regarding claim 10, Modified Jain teaches the apparatus of claim 9. Kelly teaches wherein the graphical user interface comprises a plurality of visual elements (the user interface may be a user application, browser, user interface, and/or similar words used herein interchangeably executing on and/or accessible via the client computing entity 102 to interact with and/or cause display of information/data from the computing entity 200 [0046]; The external computing entities 108, for example, may include and/or be associated with one or more entities that may be configured to receive, transmit, store, manage, and/or facilitate datasets, such as the document data store [0030])
associated with a plurality of event handlers (the term “document data store” refers to a data structure that describes data associated with controlled text document domain [0059]; In addition, or alternatively, the generative service may include a remote service that is implemented by a remote computing system [0158]; The Examiner notes “document data store” refers to a data structure that can be remotely operated in response to a user interaction).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of Jain, as modified by Chen and Bruni, to incorporate the teachings of Kelly for the benefit of a Large Language Model (LLM) that may be iteratively retrained to continuously and automatically improve without a human in the loop (Kelly [0091]).
Regarding claim 14, claim 14 is similar to claim 4. It is rejected in the same manner and for the same reasons.
Regarding claim 15, claim 15 is similar to claim 5. It is rejected in the same manner and for the same reasons.
Regarding claim 17, claim 17 is similar to claim 7. It is rejected in the same manner and for the same reasons.
Regarding claim 19, claim 19 is similar to claim 9. It is rejected in the same manner and for the same reasons.
Regarding claim 20, claim 20 is similar to claim 10. It is rejected in the same manner and for the same reasons.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MORIAM MOSUNMOLA GODO whose telephone number is (571)272-8670. The examiner can normally be reached Monday-Friday 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle T. Bechtold can be reached on (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.G./Examiner, Art Unit 2148
/MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148