Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
2. This Final Office Action is responsive to Applicants’ reply with amendments and arguments, as received 11/7/25. Claims 1-20 were pending. By way of the reply, claims 9 and 18 are cancelled, and claims 21-22 are added. Hence, claims 1-8, 10-17, and 19-22 remain pending, of which claims 1 and 16-17 are independent.
3. Based on Applicants’ reply, the previously presented rejections of claims 8, 11-13, and 18-19 under 35 U.S.C. 112(b) are now withdrawn.
4. The previously presented rejections under 35 U.S.C. 101 are maintained for the remaining pending claims.
Claim Rejections - 35 USC § 101
5. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
6. Claims 1-14 and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Taking independent claim 1 as representative, Step 1 of the Subject Matter Eligibility Test (MPEP 2106) determines whether the claimed invention is directed to a process, machine, manufacture, or composition of matter. In this instance, the claim is directed to “...”, which qualifies as “...” under Step 1.
Moving on to Step 2A Prong One, we determine whether the claim recites an abstract idea, law of nature, or natural phenomenon. MPEP 2106.04 (I-II) and 2106.04(a).
Claim 1 recites, in part, the following as reproduced just below. The Examiner has bolded the limiting features that are directed to an abstract idea (to be discussed in 2A Prong One) and underlined the features that are directed to an additional element (to be discussed in 2A Prong Two and 2B):
A method implemented via a computing device, the method comprising:
receiving, by the computing device, data associated with a subject having a neurodevelopmental disorder (NDD), the data associated with the subject comprising demographic data, schooling data, family medical data, prior therapy data, observational assessment data, medication data, goals data, or combinations thereof, and
evaluating, by the computing device, the data associated with the subject via a neurodevelopmental disorder treatment recommendation (NDDTR) model, wherein the NDDTR model is configured to evaluate the data associated with the subject to determine a therapy recommendation, wherein the NDDTR model is a machine learning model selected from the group consisting of a deep learning model, a generative adversarial network, a computational neural network model, a recurrent neural network model, a perceptron model, a classical tree-based machine learning model, a decision tree type model, a regression type model, a classification model, a reinforcement learning model, and combinations thereof, and wherein the therapy recommendation comprises a standard of care associated with provision of applied behavior analysis (ABA) therapy.
In view of the bolded features noted above by the Examiner, the claim is principally directed to a method that involves receiving data about a subject and evaluating the data using a model to determine a recommendation. A person can receive information, consider it, and use it to make a recommendation, by way of mental steps, in accordance with a particular model, e.g., a treatment model that is essentially a protocol or a defined set of best practices. On this basis, the bolded features noted above direct the claim’s interpretation to be that of largely an abstract idea, with some additional elements (as underlined) included, which will be discussed in the analysis to come.
Moving on to Step 2A Prong Two, we determine whether the claim recites additional elements that integrate the judicial exception into a practical application. MPEP 2106.04(d). The additional elements are as noted below:
The use of a computing device to implement what is essentially the abstract idea as discussed above. This is akin to merely reciting the words “apply it” or its equivalent with the abstract idea, or merely implementing the abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Hence, it is not sufficient to integrate the judicial exception into a practical application.
The clarification of the subject having a neurodevelopmental disorder, as recited. The Examiner reasons that this merely provides some context for the person whose data is being subjected to the abstract idea, and that such a caveat alone does not meaningfully integrate what is otherwise just a thought process into a practical application. Said another way, merely clarifying what the subject data is or is about does not make an abstract idea sufficiently integrated into a practical application.
The clarification of the received data being comprised of demographic data, schooling data, family medical data, prior therapy data, observational assessment data, medication data, goals data, or combinations thereof. As discussed just above, this merely provides some context for the person whose data is being subjected to the abstract idea, and such a caveat alone does not meaningfully integrate what is otherwise just a thought process into a practical application. Said another way, merely clarifying what the subject data is or is about does not make an abstract idea sufficiently integrated into a practical application.
The clarification of the recommendation being a therapy recommendation comprising a standard of care, and now a standard of care associated with ABA therapy. While the Examiner acknowledges that this provides a real-world case for how the abstract idea may be practiced or used, it does not make the abstract idea any less abstract to simply clarify what the output of the mental process / abstract idea represents in real-world terms.
The Examiner notes that, by way of the amendment, the model has been clarified to be a machine learning model selected from a list of different machine learning model types. In the Examiner’s view, this limitation, when recited at this high level of generality, is essentially taking the limitation or step of an evaluation or judgment or mental step and saying “apply it” to a computer environment. Hence, it is merely applying an abstract idea to a computer-implemented environment, and is therefore not sufficient to integrate it into a practical application.
Finally, in Step 2B, we evaluate the additional elements to determine whether they amount to significantly more than the judicial exception. MPEP 2106.05. Revisiting the additional elements as discussed above per Step 2A Prong Two, the Examiner does not find the additional elements sufficient to provide significantly more than the abstract idea as otherwise characterized by the Examiner. For example, the high-level and essentially mere use of a computing device to implement the abstract idea as discussed above is akin to merely reciting the words “apply it” or its equivalent with the abstract idea, or merely implementing the abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; for that reason, it cannot be understood to provide significantly more than the judicial exception. Further, the clarification of the subject and information about the subject essentially provides a description or context for the data; however, this clarification does not make the mental process of the abstract idea any less abstract and therefore cannot be understood to add significantly more. Finally, the clarification of the model to be a particular type of machine learning model, with no further meaningful detail, cannot be said to provide significantly more than an abstract idea.
Based on the reasoning as provided above, independent claim 1 is not eligible subject matter.
Independent claim 16 includes many of the same or similar limitations as claim 1, and is therefore rejected under the same rationale.
Further, claim 16 is expressly directed to a computing system, which the Examiner understands to be a type of “machine” under Step 1. The additional context provided by the system per claim 16 (e.g., processor, non-transitory computer-readable medium) does not meaningfully alter the result of the Examiner’s analysis as provided above per claim 1; for example, the computer-relatedness of the claim as recited at this level of generality is not “a particular machine that is integral to the claim” per MPEP 2106.04(d)(2). Rather, merely reciting the words “apply it” or their equivalent with a judicial exception, or merely including instructions to implement an abstract idea on a computer or machine, or merely using a computer or a machine as a tool to perform an abstract idea, has been found not sufficient. See, e.g., MPEP 2106.05(f).
Excluding claim 15, the other dependent claims which depend from independent claim 1 are likewise rejected, because they are not deemed to materially change the outcome of the analysis as provided for the independent claims. The Examiner will address each dependent claim below, in turn:
Regarding claim 2, the claim is directed to a clarification as to what type of subject is analyzed via data gathering/collection and evaluation thereof. At best, the limitation clarifies a descriptive scope for the gathered/collected data without any meaningful further active limitation that would serve to integrate the abstract idea of the claims into a practical application or otherwise provide significantly more than the abstract idea.
Regarding claims 3-7, the claims are directed to a clarification defining what data is subject to gathering/collection and evaluation thereof. At best, the limitation clarifies the type of data used as inputs to serve as a basis for realizing an output result of a mental process such as the one discussed above per claim 1, and hence does not constitute any meaningful further active limitation that would serve to integrate the abstract idea of the claims into a practical application or otherwise provide significantly more than the abstract idea.
Regarding claim 8, the claim is directed to a clarification defining the richness or scope of the data subject to gathering/collection and evaluation thereof. At best, the limitation clarifies aspects of the data (e.g., essentially metadata) used as inputs to serve as a basis for realizing an output result of a mental process such as the one discussed above per claim 1, and hence does not constitute any meaningful further active limitation that would serve to integrate the abstract idea of the claims into a practical application or otherwise provide significantly more than the abstract idea.
Regarding claims 9-10, the claims are directed to a clarification as to what type of model is used to implement the evaluation and recommendation features of the claimed invention. The Examiner reasons that the different types of models as recited provide differentiation as to what machine learning approach is used at a very general and high level, which amounts essentially to what algorithms or steps are used to translate a set of inputs to a set of outputs. While this is a meaningful distinction in defining the model of the claimed invention, the limitation does not constitute any further steps that would actively integrate the abstract idea of the claims into a practical application or otherwise provide significantly more than the abstract idea. At best, it merely serves to define what algorithm or what mental process constitutes the model and hence the claimed invention.
Regarding claim 11, the claim is directed to a particular type of machine learning model/approach, and hence the rationale provided above per claims 9-10 is reiterated here.
Regarding claim 12, the claim is directed to details about how the model may be trained, e.g., hyperparameters and a tuning threshold for model sensitivity. Training a model at this level of generality is akin to iteratively modifying parameters for the model until accuracy/error standards are satisfied, and that involves an arrangement of mental steps that repeat. Hence, the provision of details that define the way in which these training-related mental steps are carried out is essentially an evaluation or a judgment defining a meta-aspect of training the model, with the model itself, in the manner it is recited, being a clear example of a mental process. Accordingly, a limitation that clarifies or defines this meta-aspect of training the model does not constitute any meaningful further active limitation that would serve to integrate the abstract idea of the claims into a practical application or otherwise provide significantly more than the abstract idea.
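For illustration only, the iterative, threshold-driven training characterized above may be sketched as the following minimal loop. The data, parameter names, and fitting procedure below are hypothetical and are not drawn from the claims or the record; the sketch merely shows the repeating "adjust parameters until an error threshold is satisfied" pattern referenced in the analysis.

```python
# Hypothetical sketch of threshold-driven iterative training (not from the
# record): fit a single parameter w in the model y = w * x by repeatedly
# adjusting w until the mean squared error falls below a tuning threshold.
def tune_parameter(xs, ys, threshold=0.01, lr=0.1, max_iters=1000):
    w = 0.0
    for _ in range(max_iters):
        errors = [w * x - y for x, y in zip(xs, ys)]
        mse = sum(e * e for e in errors) / len(errors)
        if mse < threshold:  # the "tuning threshold" stopping condition
            break
        # gradient of MSE with respect to w, used to update the parameter
        grad = 2 * sum(e * x for e, x in zip(errors, xs)) / len(xs)
        w -= lr * grad
    return w

# Hypothetical data generated from y = 2x; the loop converges toward w = 2.
w = tune_parameter([1, 2, 3, 4], [2, 4, 6, 8])
```

The point of the sketch is that each pass is the same evaluative step repeated, which is why the analysis above characterizes such training, recited at this level of generality, as an arrangement of repeating mental steps.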
Regarding claim 13, the claim is directed to a particular type of machine learning model/approach, and hence the rationale provided above per claims 9-10 is reiterated here. The claim further details a range of how many trees and their depths, which only serves to define the size and scope of the mental process that is the model itself. While this is a meaningful distinction in defining the model of the claimed invention, the limitation does not constitute any further steps that would actively integrate the abstract idea of the claims into a practical application or otherwise provide significantly more than the abstract idea.
Regarding claim 14, the claim is directed to clarifying a definition for what the output is based on the model’s evaluation of the inputs. While this is a meaningful distinction in defining the model of the claimed invention, the limitation does not constitute any further steps that would actively integrate the abstract idea of the claims into a practical application or otherwise provide significantly more than the abstract idea. At best, it defines a context or a type of information that is the output of an evaluation or judgment.
Claim Rejections - 35 USC § 103
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
10. Claims 1-3, 5-10, 14-17, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0045696 (“Griffin”) in view of Non-Patent Literature “Machine learning-based ABA treatment recommendation and personalization for autism spectrum disorder: an exploratory study” (“Kohli”).
Regarding claim 1, Griffin teaches a method implemented via a computing device (a computing device as taught per FIG. 1 and [0023]-[0029] (“health evaluator” that includes memory, control circuitry, a UI, and so on, which may be implemented using a tablet or a smartphone as discussed per [0025])), the method comprising:
receiving, by the computing device, data associated with a subject having a healthcare condition (FIG. 2 step 22, as discussed per [0068] discussing the generation and receipt of health data as provided to the health evaluator, where the health data is used for evaluation, prediction, and recommendation aspects relating to a patient’s health ([0066])), the data associated with the subject comprising demographic data ([0033]: “The pertinent health data can include patient data, which is data about the specific patient (e.g., age, weight, height, ethnicity, race, economic status, lifestyle factors ...” and later “professional, marital status, age, sex, race, household factors ...”, where the Examiner equates the italicized portions discussed here per [0033] with the recitation for “demographic data”), schooling data, family medical data, prior therapy data ([0033]: “treatment data, which is data regarding treatment actions for the patient (e.g., one or more of lifestyle changes, additional or different medications, one or more procedures, frequency of follow ups, etc.)” as used for evaluation, prediction, and recommendation aspects relating to a patient’s health), observational assessment data, medication data ([0020]: “prior medications” are used to analyze the patient’s state of health), goals data, or combinations thereof, and
evaluating, by the computing device, the data associated with the subject via a ... treatment recommendation (TR) model, wherein the TR model is configured to evaluate the data associated with the subject to determine a therapy recommendation ... wherein the therapy recommendation comprises a standard of care (FIG. 2 steps 24-30 culminating with an analysis result that is “treatment information”, which [0085] clarifies to include “... health actions that correspond with improved patient health and/or with the patient achieving the modified health data associated with improved patient health. For example, the health evaluator can output one or more of treatment actions (e.g., prescriptions, doctor appointments, etc.) and patient actions (e.g., at least 30 minutes of exercise, diet changes, etc.)”, where the Examiner reasons that the taught “treatment actions” and “patient actions” read on the recited “therapy recommendation” comprising “a standard of care”) associated with provision of applied behavior analysis (ABA) therapy ([0110]: “The health evaluator 1 and method 32 provide significant advantages. The health evaluator can predict lab results and generate an accurate medical diagnosis for a patient based on the current health of the patient and based on modifications in the health of the patient. The modifications can provide information regarding treatment actions, patient actions, etc. that will result in an optimal patient health outcome. By using the predictive model to predict how patient behavior, medications and interventions affect future vital signs and lab results, in order to predict future diagnoses, the most beneficial patient behavioral changes, medications and interventions can be identified that will also minimize the number of diagnoses of that patient. 
The predictive health evaluator model is configured to predict how patient behavior, medications, and interventions affect future vital signs and lab results, in order to predict future diagnoses, the most beneficial patient behavioral changes, medications, and interventions that provide an optimal treatment plan. The health evaluator analyzes all pertinent health data to generate predictive results, providing decision support to clinicians that may not be fully aware of other actions regarding the patient's health. Individual clinicians can analyze and modify suggested treatments based on the outputs from health evaluator thereby accounting for all aspects of the patient's health.”) and
wherein the NDDTR model is a machine learning model selected from the group consisting of a deep learning model, a generative adversarial network model, a computational neural network model, a recurrent neural network model, a perceptron model, a classical tree-based machine learning model, a decision tree type model ([0050] discussing that the health predictor model can be an ensemble model of decision trees, as part of a larger discussion of the tree based model per [0050]-[0060]), a regression type model, a classification model, a reinforcement learning model, and combinations thereof ...
Griffin does not teach the further limitations of a subject having a healthcare condition that is a neurodevelopmental disorder (NDD), thereby making the treatment recommendation model per Griffin specifically a neurodevelopmental disorder treatment recommendation (NDDTR) model. Griffin notes a scope for its recommendation model that encompasses an open-ended breadth of diseases/conditions ([0117]) for which there could be patient health data subject to analysis, but the open-ended list does not explicitly include NDDs as recited. That said, the Examiner believes that NDDs, like many other conditions such as those per Griffin’s [0117], can be subject to characterization and model-driven analysis for recommendation purposes as Griffin does more broadly. The Examiner relies upon Kohli to teach what Griffin explicitly lacks; see, e.g., Kohli’s comparable recommendation system that considers patient similarity (section 2.3), using patient data, to generate a personalized treatment recommendation (sections 2.4-2.5) for a particular patient. See also its FIG. 1 on page 6 for a system/flow diagram. See Abstract, and also item #2 of participant inclusion criteria (found at the top of page 5): “Children should have a diagnosis of autism spectrum disorder using standardized instruments such as the DSM-V, CARS-2, ADI-R, INDT-ASD, ISAA, or any other evidence-based ASD diagnostic tool.”
Both Griffin and Kohli involve data-driven and model-based approaches to providing a personalized treatment recommendation to health care patients. Hence, they are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extend Griffin’s open-ended framework to include autism spectrum disorder as considered subject matter, with a reasonable expectation of success, e.g., to address the shortage of qualified personnel to help patients with an ASD diagnosis (as Kohli addresses in its Introduction section) using Griffin’s comparatively more deployment-ready and treatment-forward framework (e.g., relative to Kohli’s posture as more of a study).
Regarding claim 2, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the neurodevelopmental disorder is autism spectrum disorder (ASD) (Kohli: Abstract, and also item #2 of participant inclusion criteria (found at the top of page 5): “Children should have a diagnosis of autism spectrum disorder using standardized instruments such as the DSM-V, CARS-2, ADI-R, INDT-ASD, ISAA, or any other evidence-based ASD diagnostic tool.”). The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 3, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the data associated with the subject comprise the demographic data, wherein the demographic data comprise age data (Griffin’s [0033]: “The pertinent health data can include patient data, which is data about the specific patient (e.g., age, ...)” and later “professional, marital status, age, sex, race, household factors ...”; and Kohli: page 2, just before the start of section 2 discussing “We used assessment records, sociodemographic data such as age and gender ...”, and page 4, section 2.5, teaching “The patient’s age and vitals are analyzed to identify similar patients.”, and section 3.1, teaching “The sociodemographic data, including age, gender, ethnicity, and ASD diagnosis for participants, were collected”, and page 5, section 3.3 teaching “... We assume that children would benefit from ABA treatment goals that have shown success to an existing child with similar gender, age, and assessment scores ... ”). The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 5, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the data associated with the subject comprise the prior therapy data, wherein the prior therapy data comprise an indication of the subject having previously received occupational therapy, an indication of the subject having previously received speech therapy, an indication of duration of applied behavioral analysis (ABA) therapy previously received by the subject (Kohli: page 7, section 3.6: “Therefore, we investigated the relationship between patients’ treatment profiles, including sociodemographic data (age, gender), domain and target codes, treatment duration, and effectiveness (days to mastery) as interaction items and as an input vector to develop an effective treatment recommendation system using CF.”), an indication of amount of ABA therapy previously received by the subject (Kohli: page 7, section 3.6: “Therefore, we investigated the relationship between patients’ treatment profiles, including sociodemographic data (age, gender), domain and target codes, treatment duration, and effectiveness (days to mastery) as interaction items and as an input vector to develop an effective treatment recommendation system using CF.”), or combinations thereof. The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 6, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the data associated with the subject comprise an indication of the subject's tendency toward aggressive behavior, an indication of the subject's tendency toward stereotypy, an indication of the subject's tendency toward destructive behaviors, an indication of the consequences implemented by a caregiver of the subject responsive to negative behavior, an indication of the subject's ability to be understood, an indication of the subject's ability to understand others (Kohli’s patient similarity is based on scoring using assessments from SRS-2 and VB-MAPP, as discussed on page 7, column 1, paragraph 1, and further down the same page in the table for Algorithm 1 (see similar teaching in section 3.6’s first paragraph and in the last paragraph of section 5.1), and where the SRS-2 and VB-MAPP assessments (per Appendix 1 on pages 21-22) are understood to indicate “social communication” and “verbal and related skills ... measuring learning and language milestones” and “language acquisition”), an indication of variety of foods eaten by the subject, an indication of the subject's ability to use a toilet independently, an indication of the subject's ability to bathe independently, an indication of stimulatory behaviors exhibited by the subject, or combinations thereof. The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 7, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the data associated with the subject comprise the goals data (Kohli: page 5, section 3.2 teaching “Parents used mobile or web applications to track their child’s progress, shared 10–15 min child’s progress videos weekly, and recorded responses to skill development treatment goals. At the start of months zero, four, and six, the children underwent a detailed SRS-2 and VB-MAPP assessment.”), wherein the goals data comprise an indication of a goal of improved communication skills (where the SRS-2 and VB-MAPP assessments (as clarified per Appendix 1 on pages 21-22) are understood to indicate “social communication” and “verbal and related skills ... measuring learning and language milestones” and “language acquisition”, i.e. the Examiner reasons that the goals as previously referenced, when understood in relation to those skill assessment tools particularly, are understood to include communication goals/skills), an indication of a goal of improved diet, an indication of a goal of increased independence, an indication of a goal of improved ability to express emotions, or combinations thereof. The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 8, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the data associated with the subject comprise structured data (electronic medical records, per Griffin’s [0017] and Kohli’s sections 2.3-2.4 and 3.5, e.g. as maintained in a database as shown in Kohli’s FIG. 1 on its page 6, would be understood to constitute “structured data” as recited), wherein the data associated with the subject comprise a plurality of data features, and wherein the plurality of data features comprises not more than 30 different data features (features per Griffin’s EMRs, as discussed at [0039]: “Each set of features can include tens, hundreds, or thousands of features”, i.e., the taught range is extremely broad and encompasses feature counts of fewer than 30, e.g., where just 2-3 instances of “tens ... of features” as Griffin teaches are present). The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 9, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the NDDTR model is a machine learning model selected from the group consisting of a deep learning model, a generative adversarial network model, a computational neural network model, a recurrent neural network model, a perceptron model, a classical tree-based machine learning model, a decision tree type model, a regression type model, a classification model, a reinforcement learning model, and combinations thereof (Griffin’s [0046] discussing that the model may be a neural network, decision tree model, a deep learning algorithm, a linear regression model, a classification model, etc.; and further, Kohli is understood to teach some version of a neural network model). The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 10, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the machine learning model is a gradient-boosted tree model comprising a plurality of weighted decision trees (Griffin’s [0046]: decision trees and boosted decision trees, and [0050]: “gradient boosted decision tree”, and [0056]: “The decision trees are weighted based on the predictive accuracy ...”). The motivation for combining the references is as discussed above in relation to claim 1.
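For illustration only, an ensemble of weighted decision trees of the kind cited above (Griffin’s [0046], [0050], [0056]) may be sketched as follows. The sketch is a hypothetical simplification, not drawn from either reference: each “tree” is a one-split regression stump fit to the residuals of the ensemble so far, and a uniform per-tree weight stands in for the accuracy-based weighting Griffin describes.

```python
# Hypothetical sketch of a gradient-boosted ensemble of weighted one-split
# regression "stumps" (not from the record).
def fit_stump(xs, residuals):
    """Find the single threshold split minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    return best[1:]  # (threshold, left_value, right_value)

def boost(xs, ys, n_trees=20, weight=0.5):
    """Additive ensemble: each stump is fit to the current residuals and
    contributes its prediction scaled by a uniform tree weight."""
    trees, preds = [], [0.0] * len(xs)
    for _ in range(n_trees):
        residuals = [y - p for y, p in zip(ys, preds)]
        t, lv, rv = fit_stump(xs, residuals)
        trees.append((t, lv, rv))
        preds = [p + weight * (lv if x <= t else rv)
                 for x, p in zip(xs, preds)]
    return trees

def predict(trees, x, weight=0.5):
    # Sum the weighted vote of every tree in the ensemble.
    return sum(weight * (lv if x <= t else rv) for t, lv, rv in trees)
```

As the analysis above reflects, each stump here simply encodes a comparison and an averaged outcome; the ensemble as recited at the claims’ level of generality is a composition of such evaluative steps rather than a particular technical implementation.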
Regarding claim 14, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations wherein the standard of care associated with the provision of the ABA therapy comprises an indication of intensity of ABA therapy, an indication of ABA therapy services (Griffin: FIG. 2 steps 24-30 culminating with an analysis result that is “treatment information”, which [0085] clarifies to include “... health actions that correspond with improved patient health and/or with the patient achieving the modified health data associated with improved patient health. For example, the health evaluator can output one or more of treatment actions (e.g., prescriptions, doctor appointments, etc.) and patient actions (e.g., at least 30 minutes of exercise, diet changes, etc.)”, where the Examiner reasons that a duration of exercise as taught is an indication of intensity as recited, and further where the Examiner reasons that the taught “treatment actions” and “patient actions” read on services as recited, e.g. exercise and diet plans for which services are known to be involved at a user’s discretion, e.g. via a gym, a trainer, a dietician, a nutritionist, a chef, etc. (meaning, it would be obvious to employ a service such as those mentioned just now to pursue the recommended treatment plan)), or an indication of one of a comprehensive ABA therapy or a focused ABA therapy. The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 15, Griffin in view of Kohli teach the method of claim 1, as discussed above. The aforementioned references further teach the additional limitations further comprising providing therapy to the subject based upon the therapy recommendation (a recommendation of “personalized treatment prescriptions” per Kohli’s Introduction on its page 2; see also Kohli’s sections 2.4-2.5 and 3.2, which discuss more extensively what the personalized treatment could entail and which the Examiner reasons sufficiently read on the high-level “providing therapy” of this instant limitation). The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 16, the claim includes the same or similar limitations as claim 1 discussed above, and is therefore rejected under the same rationale. The instant claim specifically recites a processor and a non-transitory computer-readable medium, which is further taught per Griffin’s FIG. 1 and [0023]-[0029] (e.g., the taught “health evaluator” includes memory, control circuitry, etc. that read on these further recitations).
Regarding claim 17, the claim includes the same or similar limitations as claim 1 discussed above, and is therefore rejected under the same rationale. The instant claim specifically recites many of the same model aspects but with respect to its training and essentially its creation, which the Examiner believes Griffin sufficiently reads on, see e.g., Griffin’s Abstract (“A predictive patient health machine learning model is trained based on baseline health data configured as directed graphs. Patient-healthcare system encounter data formed at least in part by electronic medical records (EMRs) is gathered. The patient-healthcare system encounter data is configured as directed graphs to generate graphed health data and the predictive patient health machine learning model is trained on that graphed health data.”), with [0002], [0008], [0017], and [0021]-[0023] for example serving as restatements of the Abstract’s substance. The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 21, the claim includes limitations similar to those discussed above in relation to claim 1, and is therefore rejected under the same rationale. Specifically, see the Examiner’s citation to Griffin’s [0050], discussing that the health predictor model can be an ensemble model of decision trees, as part of a larger discussion of the tree-based model per [0050]-[0060] (thereby reading on the limitation that the model may consist of “a decision tree type model” as recited).
Regarding claim 22, Griffin in view of Kohli teach the method of claim 1, as discussed above. The instant claim further recites the additional limitations wherein the machine learning model is a gradient-boosted tree model comprising a plurality of weighted decision trees (Griffin’s [0050]: “... The health predictor model can generate a final prediction based on individual predictions made by multiple classification models forming the health predictor model. In a specific example, the health predictor model is formed based on gradient boosted decision trees. The health predictor model can be based on parallel decision tree boosting. In one example, the health predictor model utilizes the XGBoost algorithm during training of the health predictor model.”). The motivation for combining the references is as discussed above in relation to claim 1.
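For context only (and not as part of the mapping above), the cited concept of a boosted ensemble of weighted decision trees can be sketched as follows. The sketch is the Examiner's illustrative simplification: single-split stumps stand in for full decision trees, and a uniform learning-rate weight stands in for Griffin's accuracy-based weighting; all names and data are hypothetical.

```python
# Illustrative sketch of gradient boosting: each round fits a small tree
# (here, a single-split stump) to the current residuals, and the final
# prediction is a weighted sum of all trees.

def fit_stump(xs, residuals):
    """Brute-force the best single-split stump minimizing squared error."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue  # degenerate split, skip
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lmean if x <= t else rmean)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def boost(xs, ys, n_trees=50, lr=0.1):
    """Build an additive ensemble; lr acts as a uniform per-tree weight."""
    trees, pred = [], [0.0] * len(xs)
    for _ in range(n_trees):
        stump = fit_stump(xs, [y - p for y, p in zip(ys, pred)])
        trees.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * t(x) for t in trees)

# Hypothetical toy data: the ensemble learns to separate low from high x.
model = boost([1, 2, 3, 4], [0, 0, 1, 1])
```

After 50 rounds the residual shrinks geometrically, so model(1) approaches 0 and model(4) approaches 1, illustrating how many weak, weighted trees combine into one strong predictor.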
11. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Griffin in view of Kohli and further in view of Non-Patent Literature “Patient Outcomes After Applied Behavior Analysis for Autism Spectrum Disorder” (“Choi”).
Regarding claim 4, Griffin in view of Kohli teach the method of claim 1, as discussed above. The instant claim further recites the additional limitations wherein the data associated with the subject comprise the schooling data, wherein the schooling data comprise an indication of whether the subject attends school, an indication of whether the subject has been assigned a school aide, an indication of whether the subject is a part of a special education program, or combinations thereof. Regarding this, Griffin in view of Kohli teaches the consideration of a patient’s “educational background” (Kohli’s page 20, 2nd column, 1st full paragraph: “Further, sociodemographic characteristics such as age, gender, place of residence, access to healthcare, family income, and educational background can affect the treatment design and delivery. The above challenges can be overcome by designing a feature vector during the patient intake to capture diagnostic and functional assessment scores, age, gender, and other sociodemographic characteristics. At the intake stage, using the feature vector, the patient similarity model can compare incoming patients to an extensive patient database to recommend the most similar patients and correlate their treatment trajectory with outcomes, allowing physicians to select the ideal treatment strategy”), which could indicate “whether the subject attends school” (as recited in the instant limitation).
However, to the extent that Kohli’s teaching referenced above is not deemed sufficient, the Examiner further relies upon a firmer teaching found in Choi, see e.g., page 5, under the heading Predictor Variables and Covariates: “Our predictor variables were ABA dose and service history (past and current receipt of ABA, past and current receipt of special education ...” (which concretely reads on the recitation for “an indication of whether the subject is a part of a special education program”).
Like Kohli, Choi is directed to an evaluation of patient information in EHR/EMR data to identify patterns that serve as meaningful predictors for a similar subject population. Hence, they are similarly directed and therefore analogous. It would have been obvious to incorporate consideration of schooling data, such as the special-education receipt that Choi specifically contemplates, in the same/like manner that Kohli and Griffin already more broadly/generally do, with a reasonable expectation of success, as a way to incorporate an additional piece of information that could serve as a meaningful predictor (as Choi purports) and thereby further improve the model performance per Griffin and Kohli.
12. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Griffin in view of Kohli and further in view of Non-Patent Literature “How to Tune the Number and Size of Decision Trees with XGBoost in Python” (“Brownlee”).
Regarding claim 11, Griffin in view of Kohli teach the method of claim 10, as discussed above. The instant claim further recites the additional limitation wherein the gradient-boosted tree model comprises from 50 decision trees to 400 decision trees, and wherein the NDDTR model has a tree depth of at least 2 and not more than 6. The aforementioned references teach the use of gradient-boosted tree models (e.g., as discussed per claim 10, Griffin’s [0050] for example) and clarify that such tree models would have a “number of layers” per Griffin’s [0053] (which the Examiner equates with “tree depth”). That said, neither Griffin nor Kohli presents a clear teaching as to the number of trees or a numerical range for tree depth. Rather, the Examiner relies upon Brownlee to teach what Griffin in view of Kohli otherwise lack, see e.g., Brownlee’s page 3 (contemplating a tunable best number of trees in a range spanning from 100 to 350) and page 4 (contemplating a tunable depth between 1 and 9). Respectfully, the ranges for tree number and tree depth as taught by the reference overlap with the ranges for both aspects as recited, therefore providing a teaching that reads on the limitation as recited.
Like Griffin, Brownlee is directed to a tree-based modeling approach. Hence, they are similarly directed and therefore analogous. It would have been obvious to incorporate Brownlee’s tunable aspects for its tree structures and model into Griffin’s modified framework, with a reasonable expectation of success, as a way to permit tuning of the model, as is generally done, to promote model efficiency, accuracy, and so forth.
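As a purely illustrative check of the overlap reasoning above (treating the recited and taught values as closed numeric ranges), the claimed ranges and Brownlee's tunable ranges intersect as follows:

```python
# Illustrative only: intersection of two closed ranges, applied to the
# claimed tree-count/depth ranges versus Brownlee's tunable ranges.

def overlap(a, b):
    """Return the overlapping sub-range of two closed ranges, or None."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Claimed: 50-400 trees, depth 2-6; Brownlee: 100-350 trees, depth 1-9.
print(overlap((50, 400), (100, 350)))  # (100, 350)
print(overlap((2, 6), (1, 9)))         # (2, 6)
```

Both intersections are non-empty, consistent with the position that the taught ranges read on the recited ranges.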
13. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Griffin in view of Kohli and further in view of Non-Patent Literature “Goal-Oriented Sensitivity Analysis of Hyperparameters in Deep Learning” (“Novello”).
Regarding claim 12, Griffin in view of Kohli teach the method of claim 10, as discussed above. The aforementioned references teach the further limitation for identifying NDDTR model hyperparameters, wherein the NDDTR model hyperparameters comprise tree depth, number of decision trees, learning rate, scale positive weight, alpha regularization parameter, gamma regularization parameter, or combinations thereof (Griffin’s discussion of model hyperparameters, inclusive of “tree layers” (i.e., “tree depth” as recited), per [0053]) and certainly the use of the taught hyperparameters for tuning the NDDTR model hyperparameters ... (Examiner: that is the point of hyperparameters), but not the further limitation wherein the tuning of the NDDTR model hyperparameters is effective to provide for an NDDTR model sensitivity of from 0.75 to 0.99. Rather, the Examiner relies upon Novello to teach what Griffin and Kohli otherwise lack, see e.g., Novello’s page 20, section 5.1.1, discussing an accuracy going up to about 99%.
Like Griffin, Novello is directed to a comparable modeling approach that involves hyperparameter tuning of the model. Hence, they are similarly directed and therefore analogous. It would have been obvious to incorporate Novello’s sensitivity analysis into Griffin’s modified framework, with a reasonable expectation of success, to promote accuracy in the model as Novello teaches with its sensitivity analysis.
14. Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Griffin in view of Kohli and further in view of WO 2022051674 A1 (“Becich”).
Regarding claim 19, Griffin in view of Kohli and further in view of Becich teach the method of claim 17, as discussed above. The aforementioned references teach the further limitations wherein the feature selection method further comprises (1) evaluating the area under the receiver operator characteristic curve (AUROC) of each single data feature (Becich’s [0005] discussing AUROC in relation to improved performance and sensitivity); (2) removing data features that yield single feature AUROC values of equal to or less than 0.55 (Becich’s AUROC threshold appears to be at least 0.70 if not higher (based on [0027]-[0031] and [0034]-[0038]), hence it stands to reason that values below that would not be subject to selection); (3) evaluating the AUROC of the combined remaining data features; and (4) iteratively training the NDDTR model by removing one data feature at a time with replacement from the data features remaining in the training dataset (Becich’s [00119] generally discussing iterative training and more specifically [00313] discussing forward selection iteration), wherein the NDDTR model is trained using cross-validation, and wherein feature subsets are not reshuffled between folds (Becich’s [00345] discussing an approach using folds that is silent to reshuffling for purposes of cross validation). The motivation for combining the references is as discussed above in relation to claim 17.
Regarding claim 20, Griffin in view of Kohli and further in view of Becich teach the method of claim 19, as discussed above. The aforementioned references teach the additional limitations further comprising eliminating one or more of the data features causing the highest increase in mean cross-validation AUROC when removed (Becich’s [00313] discussing sequential removal of features during feature selection as part of cross validation, based on optimization using AUROC). The motivation for combining the references is as discussed above in relation to claim 17.
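For illustration only, the screening step recited in claim 19 — computing a single-feature AUROC and discarding features at or below 0.55 — can be sketched as follows; the feature names, values, and labels are hypothetical and not drawn from any reference of record:

```python
# Illustrative sketch of single-feature AUROC screening.

def auroc(scores, labels):
    """Rank-based AUROC: probability a positive example outscores a
    negative one, counting ties as half a win (Mann-Whitney U form)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical data: one informative feature and one pure-noise feature.
labels = [0, 0, 1, 1, 0, 1]
features = {
    "informative": [0.1, 0.2, 0.9, 0.8, 0.3, 0.7],
    "noise":       [0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
}

# Screen: keep only features whose single-feature AUROC exceeds 0.55.
kept = {name: vals for name, vals in features.items()
        if auroc(vals, labels) > 0.55}
```

Here the noise feature scores an AUROC of 0.5 (no better than chance) and is dropped, while the informative feature survives, matching the claimed threshold behavior.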
15. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Griffin in view of Kohli and Novello and further in view of Brownlee and CN 113609841 (“Li”).
Regarding claim 13, Griffin in view of Kohli and Novello teach the method of claim 12, as discussed above. The aforementioned references do not teach the entirety of the further limitations, and hence the Examiner relies upon the further references Brownlee and Li to teach what they lack:
A further limitation wherein the gradient-boosted tree model comprises from 50 decision trees to 150 decision trees (as discussed per claim 10, Griffin’s [0050] for example teaches the use of gradient-boosted trees, but is silent as to how many, and rather the Examiner relies upon Brownlee to teach that, see e.g., Brownlee’s page 3 (contemplates a tunable best number of trees in a range spanning from 100 to 350)).
A motivation for modifying Griffin in view of Brownlee for this same/similar limitation has been discussed above in relation to claim 11.
A further limitation wherein the NDDTR model has a tree depth of not more than 3 (Griffin’s tree models would have a “number of layers”, per Griffin’s [0053] (which the Examiner equates with “tree depth”), but is silent as to how many layers, and rather the Examiner relies upon Brownlee to teach that, see e.g., Brownlee’s page 4 (contemplates a tunable depth between 1 and 9));
A motivation for modifying Griffin in view of Brownlee for this same/similar limitation has been discussed above in relation to claim 11.
A further limitation wherein the NDDTR model has a learning rate of equal to or less than 0.4 (learning rate/parameter taught by Li, page 11: “parameter learning-rate=0.01, the proper adjustment of the parameter is good for improving the precision of the model”);
Like Griffin etc., Li is directed to creating a model and tuning it appropriately via various parameters, thereby promoting accuracy and performance in use of the model. Hence, they are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Li’s tuning aspects, as discussed here, into Griffin’s modified framework, with a reasonable expectation of success, so as to improve model performance and accuracy.
A further limitation wherein the NDDTR model has a scale positive weight of from 0.1 to 10 (weighting according to scale as taught by Li, page 11: “parameter scale-pos-weight = 1, setting this value is because the class is not balanced”);
A motivation for modifying Griffin in view of Li for this same/similar limitation has been discussed just above.
A further limitation wherein the NDDTR model has an alpha regularization parameter of from 0 to 1 (an alpha regularization parameter as taught by Li, page 11: “parameter reg-alpha=0.005, the parameter represents the weight of L1 regularization item, applying in the condition of higher dimension, the speed of the model is faster”); and
A motivation for modifying Griffin in view of Li for this same/similar limitation has been discussed just above.
A further limitation wherein the NDDTR model has a gamma regularization parameter of from 0 to 1 (a gamma regularization parameter as taught by Li, page 11: “parameter gamma=0, when the node is split, only the value of the loss function after splitting is reduced, then splitting the node. The gamma specifies a minimum loss function drop value required for node splitting. The larger the value of the parameter, the more conservative the model. The value of this parameter is associated with the loss function.”).
A motivation for modifying Griffin in view of Li for this same/similar limitation has been discussed just above.
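For illustration only, the claimed hyperparameter ranges of claim 13 can be mapped onto the standard XGBoost parameter names underlying Li's cited passages (hyphenated in Li's text, underscored in the XGBoost convention); the tree-count and depth values below are hypothetical in-range choices, while the remaining values are Li's cited values:

```python
# Illustrative only: Li's cited hyperparameter values plus hypothetical
# in-range choices for tree count and depth, checked against the ranges
# recited in claim 13.
params = {
    "n_estimators": 100,       # hypothetical; claimed 50-150 trees
    "max_depth": 3,            # hypothetical; claimed not more than 3
    "learning_rate": 0.01,     # Li p.11; claimed <= 0.4
    "scale_pos_weight": 1,     # Li p.11; claimed 0.1-10
    "reg_alpha": 0.005,        # Li p.11; claimed 0-1
    "gamma": 0,                # Li p.11; claimed 0-1
}
claimed = {
    "n_estimators": (50, 150),
    "max_depth": (1, 3),
    "learning_rate": (0.0, 0.4),
    "scale_pos_weight": (0.1, 10),
    "reg_alpha": (0.0, 1.0),
    "gamma": (0.0, 1.0),
}
in_range = all(lo <= params[k] <= hi for k, (lo, hi) in claimed.items())
print(in_range)  # True — each value falls inside its claimed range
```

Each of Li's cited values falls within the corresponding recited range, consistent with the mappings above.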
Response to Arguments
16. Applicants’ arguments have been fully considered but they are not persuasive.
See the Examiner’s reformulated rejection under 35 U.S.C. 101, which now corresponds to Applicants’ amended claims.
See the Examiner’s citation to Griffin, corresponding to Applicants’ amendments as incorporated into the obviousness rejection and its mappings as provided above.
Conclusion
17. The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure:
US 2022/0043878 Jackson
CN 112215329 A Liu
Non-Patent Literature “Machine learning model to predict mental health crises from electronic health records”
18. Applicants’ amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicants are reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
19. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHOURJO DASGUPTA whose telephone number is (571)272-7207. The examiner can normally be reached M-F 8am-5pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHOURJO DASGUPTA/Primary Examiner, Art Unit 2144