DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is a non-final rejection.
Claims 1-2, 4, 8, 10-11, 13, 18-19, 21, 23, 25, 27-30, 35-36, and 38-39 are pending.
Claims 1-2, 4, 8, 10-11, 13, 18-19, 21, 23, 25, 27-30, 35-36, and 38-39 are rejected under 35 U.S.C. § 101.
Claims 36 and 38 are objected to.
Claims 1-2, 4, 10-11, 13, 18, 23, 25, and 39 are rejected under 35 U.S.C. § 102.
Claims 8, 19, 21, 27-30, and 35 are rejected under 35 U.S.C. § 103.
Priority
Acknowledgment is made of Applicant’s claim for a foreign priority date of 10-24-2023.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4, 8, 10-11, 13, 18-19, 21, 23, 25, 27-30, 35-36, and 38-39 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Analysis
First, the claims fall within one or more of the statutory categories: a process, a machine, a manufacture, or a composition of matter. Regarding claims 1-2, 4, 8, 10-11, 13, 18-19, 21, 23, 25, 27-30, 35-36, and 38-39, the claims recite the abstract idea of “performing a clinical assessment”.
Independent claims 1 and 39 are rejected under 35 U.S.C. 101 based on the following analysis.
-Step 1 (Does the claim fall within a statutory category? YES): Claims 1 and 39 recite a method and a system for “performing a clinical assessment”.
-Step 2A Prong One (Does the claim fall within at least one of the groupings of abstract ideas? YES): The claimed invention recites:
A method and system comprising:
providing a first input to a .. model, the first input comprising template data encoding a template for carrying out a part of the clinical assessment;
providing a second input to the .. model, the second input comprising assessment data recorded during the clinical assessment;
wherein the first input is provided to the .. model to condition the .. model to provide an output based on the second input for use in the clinical assessment; and
using the output from the .. model to perform the clinical assessment.
belonging to the grouping of mental processes under concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), as it recites “performing a clinical assessment”. Alternatively, the identified abstract idea belongs to the grouping of certain methods of organizing human activity under managing personal behavior or relationships or interactions between people, as it recites “performing a clinical assessment” (refer to MPEP 2106.04(a)(2)). Accordingly, these claims recite an abstract idea.
-Step 2A Prong Two (Are there additional elements in the claim that impose a meaningful limit on the abstract idea? NO): Claims 1 and 39 recite the following additional element:
machine learning model;
This element amounts to mere instructions to implement an abstract idea on a computer, or to merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the additional elements, whether considered separately or as an ordered combination, do not integrate the judicial exception/abstract idea into a “practical application” of the judicial exception because they do not impose any meaningful limit on practicing the judicial exception.
-Step 2B (Do the additional elements of the claim provide an inventive concept? NO): As discussed previously with respect to Step 2A Prong Two, claims 1 and 39 recite:
machine learning model;
This element amounts to mere instructions to implement an abstract idea on a computer, or to merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, even when viewed as a whole, the claims do not provide an inventive concept (significantly more than the abstract idea) and hence the claims are ineligible.
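For illustration only, and not as a characterization of Applicant’s disclosure, the claimed conditioning arrangement (a first template input conditioning a model’s treatment of a second assessment-data input) resembles prompt-style conditioning. The following minimal sketch uses hypothetical names throughout; the stand-in model is a toy function, not the claimed machine learning model:

```python
# Minimal sketch of the recited conditioning pattern; all names are
# hypothetical stand-ins for the claimed "machine learning model".
def perform_assessment_step(model, template_data, assessment_data):
    # First input: template data conditioning the model's behaviour.
    # Second input: assessment data recorded during the assessment.
    prompt = f"Template:\n{template_data}\n\nAssessment data:\n{assessment_data}"
    return model(prompt)

def toy_model(prompt):
    # Stand-in model: reports whether it was conditioned on a template.
    return {"conditioned": prompt.startswith("Template:"),
            "output": "summary of assessment data"}

result = perform_assessment_step(
    toy_model,
    "Ask the subject to name as many animals as possible.",
    "cat, dog, horse",
)
```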
Dependent Claims:
Step 2A Prong One: The following dependent claims recite additional limitations that further define the abstract idea of “performing a clinical assessment”. The claim limitations include:
Claim 2: wherein the assessment data encodes a response of a subject during the clinical assessment, wherein the .. method comprises using the output to monitor or diagnose a health condition of the subject.
Claim 4: wherein the .. model comprises a generative .. model;
Claim 8: wherein the clinical assessment comprises a task for assessing a cognitive function or a neurological health condition of a subject and the assessment data encodes response of the subject during the task;
Claim 10: wherein the assessment data comprises one or more of audio data, text data, video data, image data;
Claim 11: wherein the template for administering the clinical assessment comprises one or more of:
instructions for performing the clinical assessment;
an output schema indicating how the output of the ... model should be formatted; and
example responses provided by a subject or administrator to tasks within the clinical assessment
Claim 13: wherein the first input and second input are combined and input into the .. model, such that the model is conditioned on content of the template to provide an adapted output in which probabilities of possible outputs are adjusted in view of the template
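The limitation of claim 13 (adjusting probabilities of possible outputs in view of the template) can be pictured, purely as a hypothetical sketch and not as Applicant’s implementation, as biasing raw candidate scores with template-derived weights and renormalizing:

```python
import math

def condition_probabilities(logits, template_bias):
    # Add template-derived biases to the raw scores, then apply a
    # softmax so the adjusted scores form a probability distribution.
    adjusted = {tok: score + template_bias.get(tok, 0.0)
                for tok, score in logits.items()}
    z = sum(math.exp(v) for v in adjusted.values())
    return {tok: math.exp(v) / z for tok, v in adjusted.items()}

# Two equally likely candidate words; the template mentions "cat",
# so its probability is raised relative to "car".
probs = condition_probabilities({"cat": 1.0, "car": 1.0}, {"cat": 2.0})
```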
Claim 18: wherein the clinical assessment comprises a speech-based clinical assessment comprising tasks instructed by a human .. and spoken responses to the tasks provided by a subject; wherein:
the template of the first input comprises text data defining intended content of the clinical assessment;
the assessment data of the second input comprises speech data encoding a response of the subject to an instructed task, where speech data comprises one or both of text and audio data;
wherein the .. model is a generative .. model trained to generate, based on the second input, an output usable to monitor or diagnose a health condition of the subject; and
wherein the .. method comprises conditioning the .. model on the first input to bias the .. model to adapt the generated output in view of knowledge of the intended content of the clinical assessment
Claim 19: wherein the output comprises a transformed version of the assessment data usable to monitor or diagnose a health condition, preferably wherein the output comprises one or more of:
a transcription of the speech data;
a diarised version of the speech data, where sections of the speech data are attributed to different participants in the clinical assessment data;
and a segmented version of the speech data, in which the speech data is segmented according to a structure of the clinical assessment defined in the template
Claim 21: wherein the assessment data comprises audio data encoding speech recorded during the clinical assessment and the .. model comprises a transcription model, the transcription model comprising a .. model trained to output text data comprising a transcript of the speech, the method comprising:
conditioning the transcription model by inputting template data encoding one or both of a script for the speech-based assessment and a sample subject response, thereby conditioning the model to assign a higher probability to words more likely to be produced during the task, preferably wherein the template data includes a sample patient response including disfluencies to condition the transcription model to include disfluencies in the transcription
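The conditioning recited in claim 21 can be illustrated, as a hypothetical sketch only, by assembling template data (a task script plus a sample response that retains disfluencies) alongside the audio reference before transcription; the function, schema, and file name below are assumptions, not taken from the application:

```python
def build_transcription_input(script, sample_response, audio_ref):
    # Template data: the task script and a sample response that keeps
    # disfluencies, biasing a transcription model toward in-task words
    # and toward preserving disfluencies rather than cleaning them up.
    conditioning = (f"Task script: {script}\n"
                    f"Example response: {sample_response}")
    return {"conditioning_text": conditioning, "audio": audio_ref}

model_input = build_transcription_input(
    "Please describe everything happening in the picture.",
    "um... there's a, uh, a boy on a, on a ladder",
    "recording_001.wav",  # hypothetical file name
)
```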
Claim 23: wherein the .. model comprises a rating model, the rating model comprising a .. model for outputting a rating indicating performance of a subject in an assessment task based on the assessment data, the method comprising:
providing a first input to the rating model, the first input comprising template data encoding one or both of an administration template comprising an intended format of the clinical assessment and a rating template comprising instructions for rating a subject's response to an assessment task;
providing a second input to the .. model, the second input comprising assessment data encoding the subject's response to an assessment task;
wherein the first input is provided to the .. model to condition the rating model to provide a rating based on the assessment data in view of the template data;
receiving a rating of an assessment task; and
outputting an indication of a health condition of the subject based on the rating of the assessment task
Claim 25: wherein the .. model comprises an administration model, the administration model comprising a .. model for automating the instruction of one or more tasks for monitoring or diagnosing a health condition of a subject, the method comprising:
providing a first input to the administration model, the first input comprising template data encoding an administration template comprising instructions for administering a part of the clinical assessment; and
providing a second input to the .. model, the second input comprising assessment data recorded during the clinical assessment, the assessment data comprising data encoding a response of the subject to a task administered by the .. model;
wherein the administration model maps the second input to a structured output usable to initiate an action to administer the clinical assessment; and
wherein the administration model is conditioned on the first input so that its outputs are determined in view of the administration template
Claim 27: wherein the administration model is trained to generate a structured text output encoding the action to call, the structured text output preferably comprising a structured JSON format
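The structured JSON output of claim 27 can be illustrated with a hypothetical dispatch sketch; the action name and argument schema below are assumptions for illustration, not taken from the application:

```python
import json

def parse_action(model_output):
    # The administration model emits structured JSON naming an action
    # to call together with its arguments; the caller dispatches on it.
    action = json.loads(model_output)
    return action["action"], action.get("arguments", {})

raw = '{"action": "play_instruction", "arguments": {"task": "naming", "item": 3}}'
name, args = parse_action(raw)
```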
Claim 28: further comprising:
inputting the output of the administration model into a .. model, the ..model comprising a .. model trained to output synthesised speech based on a text input;
such that the administration model outputs text encoding instructions to the subject based on the received response of the subject encoded in the assessment data, and the .. model generates an audio stream comprising instructions to the subject, thereby facilitating automated audio-verbal administration of the clinical assessment
Claim 29: further comprising:
receiving a real-time stream of assessment data during the clinical assessment;
and inputting sequential sections of the assessment data into the administration model in order to generate actions to administer the clinical assessment in real-time
Claim 30: wherein the stream of assessment data comprises audio data, the computer-implemented method further comprising:
inputting sequential sections of the audio data into a ..model, the .. model comprising a .. model trained to output text data comprising a transcript of an input section of audio data, wherein the .. model is conditioned on the first input;
inputting the text data output by the .. model into the administration model, wherein the administration model is a model trained to output structured text for initiating an action to administer the clinical assessment
Claim 35: wherein the second input comprises one or both of: a video recording of the clinical assessment and image data related to a drawing-based task of the clinical assessment
Claim 36: further comprising providing a third input to the .. model, the third input comprising a first rating indicating a subject's performance in an assessment task, the first rating suitable for monitoring or diagnosing a health condition; wherein:
the template data includes instructions for reviewing the first rating, the second input includes assessment data including a subject response to a task of the clinical assessment, and the output comprises a review rating that evaluates the quality of the first rating; wherein the assessment data further comprises a rating sheet completed by an administrator and used in providing the first rating, where the template data comprises instructions for checking the rating sheet
Claim 38:
encoding each segment of the template data into a respective representation;
splitting the assessment data into a plurality of sections and encoding each section into a respective representation;
using a pairwise scoring algorithm to compute a similarity of each of the template segment representations with each of the assessment data representations;
using an alignment algorithm to determine an optimal alignment of the plurality of sections of the assessment data with the segments of the template using the computed similarity between the template segment representations and the assessment data representations;
using the optimal alignment to split the assessment data into segments corresponding to segments of the template; and
providing one of the segments of the assessment data as an input to the machine learning model for analysing the assessment data.
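The segmentation steps recited in claim 38 can be sketched as follows; this is a toy illustration in which a bag-of-words embedding, cosine similarity, and a greedy monotonic assignment stand in for the claimed encoder, pairwise scoring algorithm, and alignment algorithm (all hypothetical choices, not Applicant’s):

```python
import math

def embed(text):
    # Toy bag-of-words embedding standing in for a learned encoder.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    # Pairwise scoring: cosine similarity between two sparse vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def align(template_segments, assessment_sections):
    # Compute the similarity of every template segment with every
    # assessment section, then assign sections greedily under a
    # monotonic (non-decreasing) constraint; a dynamic-programming
    # alignment would be used in a fuller implementation.
    seg_vecs = [embed(s) for s in template_segments]
    sec_vecs = [embed(s) for s in assessment_sections]
    sim = [[cosine(g, c) for c in sec_vecs] for g in seg_vecs]
    mapping, start = [], 0
    for i in range(len(template_segments)):
        best = max(range(start, len(assessment_sections)),
                   key=lambda j: sim[i][j])
        mapping.append(best)
        start = best  # enforce monotonic alignment
    return mapping

mapping = align(
    ["naming task: name animals", "recall task: repeat the story"],
    ["cat dog horse animals", "the story was about a boy"],
)
```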
Step 2A Prong Two (Are there additional elements in the claim that impose a meaningful limit on the abstract idea? NO): The following dependent claims recite mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, the claims as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims include:
Claim 2: computer-implemented method.
Claim 4:
machine learning model
generative machine learning model;
Claim 11:
machine learning model;
Claim 13: machine learning model;
Claim 18: computer-implemented administrator; wherein:
machine learning model is
generative machine learning model; and
computer-implemented
machine learning model
Claim 21:
machine learning model comprises a transcription model,
the transcription model comprising a generative audio-to-text model
conditioning the transcription model
Claim 23: machine learning model
Claim 25: machine learning
Claim 28:
speech synthesis model,
text-to-audio generative machine learning model
Claim 30:
transcription model,
a generative machine learning model
text-to text generative model
Claim 36: machine learning.
Step 2B (Do the additional elements of the claim provide an inventive concept? NO): As discussed previously with respect to Step 2A Prong Two, the following dependent claims recite mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to implement the abstract idea (refer to MPEP 2106.05(f)). Accordingly, these claims do not provide an inventive concept (significantly more than the abstract idea) and hence are ineligible. The claims include:
Claim 2: computer-implemented method.
Claim 4:
machine learning model
generative machine learning model;
Claim 11:
machine learning model;
Claim 13: machine learning model;
Claim 18: computer-implemented administrator; wherein:
machine learning model is
generative machine learning model; and
computer-implemented
machine learning model
Claim 21:
machine learning model comprises a transcription model,
the transcription model comprising a generative audio-to-text model
conditioning the transcription model
Claim 23: machine learning model
Claim 25: machine learning
Claim 28:
speech synthesis model,
text-to-audio generative machine learning model
Claim 30:
transcription model,
a generative machine learning model
text-to text generative model
Claim 36: machine learning
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4, 10-11, 13, 18, 23, 25, and 39 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Nakasian et al. (US 20250125007 A1), hereinafter “Nakasian”.
Regarding claims 1 and 39, Nakasian teaches:
providing a first input (input from the user computing device) to a machine learning model (machine learning .. model), the first input (input ... indicating the type(s) of health data ) comprising template data encoding a template (select a template phenotype ) for carrying out a part of the clinical assessment; (See at least [0052] via: “...Referring to FIGS. 1 and 4, the phenotype routine generator module 60 may include a template phenotype routine selection module 62 and a parameter adjustment module 64. The template phenotype routine selection module 62 may include a phenotype selection module 62A, a performance metric module 62B, and a designation module 62C. The phenotype selection module 62A is configured to select a template phenotype routine from among a plurality of template phenotype routines that are validated and stored in the phenotype database 70 based on an input from the user computing device indicating the type(s) of health data and/or the desired analysis type. As used herein, “phenotype routine” refers to a machine learning and/or artificial intelligence model that is configured to generate a relationship model associated with the structured data...”)
providing a second input to the machine learning model (machine-learning based relationship model), the second input comprising assessment data (receives data (e.g., health data)) recorded during the clinical assessment (health database 30); (See at least [0067] via: “... Referring to FIG. 7, a flowchart 700 of an example routine for generating and displaying a machine-learning based relationship model is shown. At 702, the data ingestion module 40 receives data (e.g., health data) from the at least one health database 30. At 704, the structured data generator module 50 performs a large language model routine (e.g., a GPT routine) to transform the data into structured data...”)
wherein the first input is provided to the machine learning (machine learning ) model to condition the machine learning model to provide an output based on the second input (transform the data into structured data) for use in the clinical assessment (identifies, ... various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data); (See at least [0067] via: “...At 704, the structured data generator module 50 performs a large language model routine (e.g., a GPT routine) to transform the data into structured data. At 706, the phenotype routine generator module 60 generates a phenotype routine by selectively modifying at least one parameter of a selected template phenotype routine from the phenotype database 70...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies. 
As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”; in addition see at least [0018] via: “...receiving data from at least one health database of the plurality of health databases, performing a generative pretrained transformer routine to transform the data into structured data, selecting a template phenotype routine of the plurality of template phenotype routines based on a comparison of at least one performance metric of the selected template phenotype routine and a model selection criteria, generating a phenotype routine by selectively modifying, based on a comparison of the at least one performance metric of and an analytic threshold, at least one parameter of the selected template phenotype routine, analyzing the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data, and transmitting a command to a user interface to generate a display corresponding to the output...”) and
using the output from the machine learning model (machine learning system) to perform the clinical assessment (analyze the data to identify .. models for ... improving clinical decision-making, predicting clinical trial success rates, predicting responses to treatments, predicting potential adverse events, identifying associations between exposures and diseases, identifying connections between drugs, among other implementations). (See at least [0067] via: “...At 708, the analysis module 80 analyzes the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data. At 710, the analysis module 80 transmits a command to a user interface (e.g., a user interface of the user computing device 10, such as display) to generate a display corresponding to the output...”; in addition see at least [0034] via: “...a computing system that includes a plurality of distinct machine learning and/or artificial intelligence systems (e.g., an ensemble machine learning system) that collectively operate to obtain and process relevant health data and employ an accurate model for analyzing the processed health data. Specifically, the computing system employs a generative pretrained transformer (GPT) module to transform biomedical literature, clinical trial records, observational study reports, regulatory updates, published content, and/or other types of health data into structured data that is suitable for downstream analysis by an additional machine learning system...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies. 
As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”; in addition see at least [0036] via: “...Accordingly, the transformation routines performed by the GPT enable a separate machine learning system to accurately and more efficiently analyze the data to identify causal pathways, patterns, trends, relationships, and/or predictive models for improving patient care, improving clinical decision-making, predicting clinical trial success rates, predicting responses to treatments, predicting potential adverse events, identifying associations between exposures and diseases, identifying connections between drugs, among other implementations. Moreover, the transformation routines performed by the GPT enable the computing system to produce tailored summaries, provide explanations for predictive models, or amalgamate disjointed pieces of health data to generate new structured data to be analyzed by the separate machine learning system..”)
Regarding claim 2: Nakasian teaches the invention as claimed and detailed above with respect to claim 1. Nakasian also teaches:
wherein the assessment data encodes a response of a subject during the clinical assessment, wherein the computer-implemented method comprises using the output to monitor or diagnose a health condition of the subject. (See at least [0034] via: “...a computing system that includes a plurality of distinct machine learning and/or artificial intelligence systems (e.g., an ensemble machine learning system) that collectively operate to obtain and process relevant health data and employ an accurate model for analyzing the processed health data. Specifically, the computing system employs a generative pretrained transformer (GPT) module to transform biomedical literature, clinical trial records, observational study reports, regulatory updates, published content, and/or other types of health data into structured data that is suitable for downstream analysis by an additional machine learning system...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies. 
As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”; in addition see at least [0036] via: “...Accordingly, the transformation routines performed by the GPT enable a separate machine learning system to accurately and more efficiently analyze the data to identify causal pathways, patterns, trends, relationships, and/or predictive models for improving patient care, improving clinical decision-making, predicting clinical trial success rates, predicting responses to treatments, predicting potential adverse events, identifying associations between exposures and diseases, identifying connections between drugs, among other implementations. Moreover, the transformation routines performed by the GPT enable the computing system to produce tailored summaries, provide explanations for predictive models, or amalgamate disjointed pieces of health data to generate new structured data to be analyzed by the separate machine learning system..”)
Regarding claim 4: Nakasian teaches the invention as claimed and detailed above with respect to claim 1. Nakasian also teaches:
wherein the machine learning model comprises a generative machine learning model. (See at least [0034] via: “...The present disclosure provides a computing system that includes a plurality of distinct machine learning and/or artificial intelligence systems (e.g., an ensemble machine learning system) that collectively operate to obtain and process relevant health data and employ an accurate model for analyzing the processed health data. Specifically, the computing system employs a generative pretrained transformer (GPT) module to transform biomedical literature, clinical trial records, observational study reports, regulatory updates, published content, and/or other types of health data into structured data that is suitable for downstream analysis by an additional machine learning system..”)
Regarding claim 10: Nakasian teaches the invention as claimed and detailed above with respect to claim 1. Nakasian also teaches:
wherein the assessment data comprises one or more of audio data, text data, video data, image data. (See at least [0006] via: “...receiving data from at least one health database of the plurality of health databases, performing a large language model (LLM) routine to transform the data into structured data,..”; in addition at least [0007] via: “.. the LLM routine is one of a generative pretrained transformer routine or a natural language processing routine..”)
Regarding claim 11: Nakasian teaches the invention as claimed and detailed above with respect to claim 1. Nakasian also teaches:
wherein the template for administering the clinical assessment comprises one or more of:
instructions for performing the clinical assessment;
an output schema indicating how the output of the machine learning model should be formatted; (See at least [0006] via: “...receiving data from at least one health database of the plurality of health databases, performing a large language model (LLM) routine to transform the data into structured data,..”; in addition see at least [0052] via: “...the phenotype routine generator module 60 may include a template phenotype routine selection module 62 and a parameter adjustment module 64. The template phenotype routine selection module 62 may include a phenotype selection module 62A, a performance metric module 62B, and a designation module 62C. The phenotype selection module 62A is configured to select a template phenotype routine from among a plurality of template phenotype routines that are validated and stored in the phenotype database 70 based on an input from the user computing device indicating the type(s) of health data and/or the desired analysis type. As used herein, “phenotype routine” refers to a machine learning and/or artificial intelligence model that is configured to generate a relationship model associated with the structured data...”); and
example responses provided by a subject or administrator to tasks within the clinical assessment.
Regarding claim 13: Nakasian teaches the invention as claimed and detailed above with respect to claim 1. Nakasian also teaches:
wherein the first input and second input are combined and input into the machine learning model, such that the model is conditioned on content of the template to provide an adapted output in which probabilities of possible outputs are adjusted in view of the template. (See at least [0018] via: “...receiving data from at least one health database of the plurality of health databases, performing a generative pretrained transformer routine to transform the data into structured data, selecting a template phenotype routine of the plurality of template phenotype routines based on a comparison of at least one performance metric of the selected template phenotype routine and a model selection criteria, generating a phenotype routine by selectively modifying, based on a comparison of the at least one performance metric of and an analytic threshold, at least one parameter of the selected template phenotype routine, analyzing the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data, and transmitting a command to a user interface to generate a display corresponding to the output...”)
Regarding claim 18: Nakasian teaches the invention as claimed and detailed above with respect to claim 1. Nakasian also teaches:
wherein the clinical assessment comprises a speech-based clinical assessment comprising tasks instructed by a human or computer-implemented administrator and spoken responses to the tasks provided by a subject; wherein:
(See at least [0006] via: “...a plurality of health databases, where each of the plurality of health databases has a respective syntactic standard, a phenotype routine database that stores a plurality of template phenotype routines, and a nontransitory computer-readable medium including instructions that are executable by the processor. The instructions include receiving data from at least one health database of the plurality of health databases, performing a large language model (LLM) routine to transform the data into structured data, generating a phenotype routine by selectively modifying at least one parameter of a selected template phenotype routine of the plurality of template phenotype routines, analyzing the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data, and transmitting a command to a user interface to generate a display corresponding to the output...”; in addition see at least [0007] via: “...the LLM routine is one of a generative pretrained transformer routine or a natural language processing routine...”)
the template of the first input comprises text data defining intended content of the clinical assessment; (See at least [0006] via: “...a plurality of health databases, where each of the plurality of health databases has a respective syntactic standard, a phenotype routine database that stores a plurality of template phenotype routines, and a nontransitory computer-readable medium including instructions that are executable by the processor. The instructions include receiving data from at least one health database of the plurality of health databases, performing a large language model (LLM) routine to transform the data into structured data, generating a phenotype routine by selectively modifying at least one parameter of a selected template phenotype routine of the plurality of template phenotype routines, analyzing the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data, and transmitting a command to a user interface to generate a display corresponding to the output...”; in addition see at least [0007] via: “...the LLM routine is one of a generative pretrained transformer routine or a natural language processing routine...”; in addition see at least [0052] via: “...Referring to FIGS. 1 and 4, the phenotype routine generator module 60 may include a template phenotype routine selection module 62 and a parameter adjustment module 64. The template phenotype routine selection module 62 may include a phenotype selection module 62A, a performance metric module 62B, and a designation module 62C. The phenotype selection module 62A is configured to select a template phenotype routine from among a plurality of template phenotype routines that are validated and stored in the phenotype database 70 based on an input from the user computing device indicating the type(s) of health data and/or the desired analysis type. 
As used herein, “phenotype routine” refers to a machine learning and/or artificial intelligence model that is configured to generate a relationship model associated with the structured data...”)
the assessment data of the second input comprises speech data encoding a response of the subject to an instructed task, where speech data comprises one or both of text and audio data; (See at least [0067] via: “...Referring to FIG. 7, a flowchart 700 of an example routine for generating and displaying a machine-learning based relationship model is shown. At 702, the data ingestion module 40 receives data (e.g., health data) from the at least one health database 30. At 704, the structured data generator module 50 performs a large language model routine (e.g., a GPT routine) to transform the data into structured data...”)
wherein the machine learning model is a generative machine learning model trained to generate, based on the second input, an output usable to monitor or diagnose a health condition of the subject; (See at least [0067] via: “...At 704, the structured data generator module 50 performs a large language model routine (e.g., a GPT routine) to transform the data into structured data. At 706, the phenotype routine generator module 60 generates a phenotype routine by selectively modifying at least one parameter of a selected template phenotype routine from the phenotype database 70...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies. As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”; in addition see at least [0018] via: “...receiving data from at least one health database of the plurality of health databases, performing a generative pretrained transformer routine to transform the data into structured data, selecting a template phenotype routine of the plurality of template phenotype routines based on a comparison of at least one performance metric of the selected template phenotype routine and a model selection criteria, generating a phenotype routine by selectively modifying, based on a comparison of the at least one performance metric of and an analytic threshold, at least one parameter of the selected template phenotype routine, analyzing the structured data based on the phenotype routine to generate an output that defines a relationship model
associated with the structured data, and transmitting a command to a user interface to generate a display corresponding to the output...”) and
wherein the computer-implemented method comprises conditioning the machine learning model on the first input to bias the machine learning model to adapt the generated output in view of knowledge of the intended content of the clinical assessment. (See at least [0067] via: “...At 708, the analysis module 80 analyzes the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data. At 710, the analysis module 80 transmits a command to a user interface (e.g., a user interface of the user computing device 10, such as display) to generate a display corresponding to the output...”; in addition see at least [0034] via: “...a computing system that includes a plurality of distinct machine learning and/or artificial intelligence systems (e.g., an ensemble machine learning system) that collectively operate to obtain and process relevant health data and employ an accurate model for analyzing the processed health data. Specifically, the computing system employs a generative pretrained transformer (GPT) module to transform biomedical literature, clinical trial records, observational study reports, regulatory updates, published content, and/or other types of health data into structured data that is suitable for downstream analysis by an additional machine learning system...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies.
As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”; in addition see at least [0036] via: “...Accordingly, the transformation routines performed by the GPT enable a separate machine learning system to accurately and more efficiently analyze the data to identify causal pathways, patterns, trends, relationships, and/or predictive models for improving patient care, improving clinical decision-making, predicting clinical trial success rates, predicting responses to treatments, predicting potential adverse events, identifying associations between exposures and diseases, identifying connections between drugs, among other implementations. Moreover, the transformation routines performed by the GPT enable the computing system to produce tailored summaries, provide explanations for predictive models, or amalgamate disjointed pieces of health data to generate new structured data to be analyzed by the separate machine learning system...”)
Regarding claim 23: Nakasian teaches the invention as claimed and detailed above with respect to claim 1. Nakasian also teaches:
wherein the machine learning model comprises a rating model, the rating model comprising a machine learning model for outputting a rating indicating performance of a subject in an assessment task based on the assessment data, the method comprising: (See at least [0052] via: “...Referring to FIGS. 1 and 4, the phenotype routine generator module 60 may include a template phenotype routine selection module 62 and a parameter adjustment module 64. The template phenotype routine selection module 62 may include a phenotype selection module 62A, a performance metric module 62B, and a designation module 62C. The phenotype selection module 62A is configured to select a template phenotype routine from among a plurality of template phenotype routines that are validated and stored in the phenotype database 70 based on an input from the user computing device indicating the type(s) of health data and/or the desired analysis type. As used herein, “phenotype routine” refers to a machine learning and/or artificial intelligence model that is configured to generate a relationship model associated with the structured data...”)
providing a first input to the rating model, the first input comprising template data encoding one or both of an administration template comprising an intended format of the clinical assessment and a rating template comprising instructions for rating a subject's response to an assessment task; (See at least [0052] via: “...Referring to FIGS. 1 and 4, the phenotype routine generator module 60 may include a template phenotype routine selection module 62 and a parameter adjustment module 64. The template phenotype routine selection module 62 may include a phenotype selection module 62A, a performance metric module 62B, and a designation module 62C. The phenotype selection module 62A is configured to select a template phenotype routine from among a plurality of template phenotype routines that are validated and stored in the phenotype database 70 based on an input from the user computing device indicating the type(s) of health data and/or the desired analysis type. As used herein, “phenotype routine” refers to a machine learning and/or artificial intelligence model that is configured to generate a relationship model associated with the structured data...”)
providing a second input to the machine learning model, the second input comprising assessment data encoding the subject's response to an assessment task; (See at least [0067] via: “... Referring to FIG. 7, a flowchart 700 of an example routine for generating and displaying a machine-learning based relationship model is shown. At 702, the data ingestion module 40 receives data (e.g., health data) from the at least one health database 30. At 704, the structured data generator module 50 performs a large language model routine (e.g., a GPT routine) to transform the data into structured data...”)
wherein the first input is provided to the machine learning model to condition the rating model to provide a rating based on the assessment data in view of the template data; (See at least [0067] via: “...At 704, the structured data generator module 50 performs a large language model routine (e.g., a GPT routine) to transform the data into structured data. At 706, the phenotype routine generator module 60 generates a phenotype routine by selectively modifying at least one parameter of a selected template phenotype routine from the phenotype database 70...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies. As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”; in addition see at least [0018] via: “...receiving data from at least one health database of the plurality of health databases, performing a generative pretrained transformer routine to transform the data into structured data, selecting a template phenotype routine of the plurality of template phenotype routines based on a comparison of at least one performance metric of the selected template phenotype routine and a model selection criteria, generating a phenotype routine by selectively modifying, based on a comparison of the at least one performance metric of and an analytic threshold, at least one parameter of the selected template phenotype routine, analyzing the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the 
structured data, and transmitting a command to a user interface to generate a display corresponding to the output...”)
receiving a rating of an assessment task; (See at least [0018] via: “...receiving data from at least one health database of the plurality of health databases, performing a generative pretrained transformer routine to transform the data into structured data, selecting a template phenotype routine of the plurality of template phenotype routines based on a comparison of at least one performance metric of the selected template phenotype routine and a model selection criteria, generating a phenotype routine by selectively modifying, based on a comparison of the at least one performance metric of and an analytic threshold, at least one parameter of the selected template phenotype routine, analyzing the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data, and transmitting a command to a user interface to generate a display corresponding to the output...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies. As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”) and
outputting an indication of a health condition of the subject based on the rating of the assessment task. (See at least [0067] via: “...At 708, the analysis module 80 analyzes the structured data based on the phenotype routine to generate an output that defines a relationship model associated with the structured data. At 710, the analysis module 80 transmits a command to a user interface (e.g., a user interface of the user computing device 10, such as display) to generate a display corresponding to the output...”; in addition see at least [0034] via: “...a computing system that includes a plurality of distinct machine learning and/or artificial intelligence systems (e.g., an ensemble machine learning system) that collectively operate to obtain and process relevant health data and employ an accurate model for analyzing the processed health data. Specifically, the computing system employs a generative pretrained transformer (GPT) module to transform biomedical literature, clinical trial records, observational study reports, regulatory updates, published content, and/or other types of health data into structured data that is suitable for downstream analysis by an additional machine learning system...”; in addition see at least [0035] via: “...The computing system described herein, which integrates the GPT with a customizable machine learning system, provides several improvements to conventional computing systems that obtain, process, and analyze health data for medical-based research, such as clinical and observational studies. 
As an example, the GPT described herein transforms the health data into structured data that identifies, for example, various medical entities (e.g., diseases, genes, symptoms, medications, and procedures) present in unstructured text data and determines relationships among the unstructured data...”; in addition see at least [0036] via: “...Accordingly, the transformation routines performed by the GPT enable a separate machine learning system to accurately and more efficiently analyze the data to identify causal pathways, patterns, trends, relationships, and/or predictive models for improving patient care, improving clinical decision-making, predicting clinical trial success rates, predicting responses to treatments, predicting potential adverse events, identifying associations between exposures and diseases, identifying connections between drugs, among other implementations. Moreover, the transformation routines performed by the GPT enable the computing system to produce tailored summaries, provide explanations for predictive models, or amalgamate disjointed pieces of health data to generate new structured data to be analyzed by the separate machine learning system...”; in addition see at least [0018] via: “...receiving data from at least one health database of the plurality of health databases, performing a generative pretrained transformer routine to transform the data into structured data, selecting a template phenotype