Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: “module configured for” in claim 1; “module” and “module is configured to” in claims 6-11; and “means for” in claim 16.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: claims 1-20 are directed to a statutory category, i.e., a process, machine, manufacture, or composition of matter.
With respect to claims 1, 16, 18:
2A Prong 1:
analyze the multiple types of data (encompasses mental observations or evaluations, e.g., a computer programmer’s mental identification of data);
determine multiple predictions of likelihoods of the user getting a neurological disease (abstract idea of analyzing data; mental process; a human mind with pen and paper can generate/determine data);
determine a single result indicating a likelihood of the user getting the neurological disease (abstract idea of analyzing data; mental process; a human mind with pen and paper can generate/determine data);
based on the multiple predictions (using AI to predict amounts to a mental process in the same way that a human can predict the weather with or without a computer).
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
a source module, apparatus (computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component; “the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.” Alice, 134 S. Ct. at 2358);
receive multiple types of data for a user (mere data gathering and output recited at a high level of generality - insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g));
machine learning module (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f));
using a model to predict… (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level application of a previously trained model to make a prediction).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
a source module, apparatus (computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component; “the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.” Alice, 134 S. Ct. at 2358);
receive multiple types of data for a user (mere data gathering and output recited at a high level of generality - insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g));
machine learning module (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f));
using a model to predict… (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level application of a previously trained model to make a prediction);
Further, the receiving/transmitting steps were considered to be extra-solution activity in Step 2A Prong 2, and thus they are re-evaluated in Step 2B to determine whether they amount to more than what is well-understood, routine, conventional activity in the field. The receiving and/or transmitting limitations constitute extra-solution activity. See buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) (“That a computer receives and sends the information over a network-with no further specification-is not even arguably inventive.”). The court decisions cited in MPEP 2106.05(d)(II) indicate that merely receiving and/or transmitting data over a network, e.g., using the Internet to gather data, is well-understood, routine, conventional activity. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). Thereby, a conclusion that the claimed receiving/transmitting steps are well-understood, routine, conventional activity is supported under Berkheimer. The claim is not patent eligible.
2. The apparatus of claim 1, wherein at least one of the multiple types of data comprises image data of a brain of the user (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper).
3. The apparatus of claim 2, wherein the image data comprises magnetic resonance images of the brain of the user (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper).
4. The apparatus of claim 2, wherein at least one of the multiple types of data comprises volumetric data for the brain of the user (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper).
5. The apparatus of claim 4, wherein at least one of the multiple types of data comprises an evaluation of the user by a medical professional (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
6, 17, 20. The apparatus of claim 5, wherein the machine learning module is configured to analyze the evaluation of the user by a medical professional (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data) using a random forest, to analyze the volumetric data of the brain of the user using K nearest neighbors, and to analyze the image data of the brain of the user using a convolutional neural network comprising a residual neural network (additional element considered to be generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h)).
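For illustration only, the per-modality analysis recited in claims 6, 17, and 20 (one dedicated model per data type) can be sketched as follows. This sketch is not the applicant's or Becich's implementation; the `Analyzer` class is a placeholder stand-in rather than an actual random forest, KNN, or residual CNN, and all names and likelihood values are hypothetical.

```python
# Minimal illustrative sketch of the per-modality analysis recited in
# claims 6, 17, 20. The Analyzer class is a placeholder stand-in, NOT an
# actual random forest / KNN / residual CNN; all names are hypothetical.

class Analyzer:
    """Stand-in for a trained per-modality model returning a likelihood."""
    def __init__(self, name, likelihood):
        self.name = name
        self.likelihood = likelihood

    def predict(self, data):
        # A real model would score `data`; this stub returns a fixed value.
        return self.likelihood

# One dedicated model per data modality, as the claims recite.
MODALITY_MODELS = {
    "evaluation": Analyzer("random_forest", 0.7),        # medical evaluation
    "volumetric": Analyzer("k_nearest_neighbors", 0.6),  # brain volumetrics
    "image":      Analyzer("residual_cnn", 0.8),         # MRI image data
}

def predict_all(samples):
    """Return one likelihood prediction per supplied data modality."""
    return {m: MODALITY_MODELS[m].predict(d) for m, d in samples.items()}
```

Each modality thus yields its own prediction; a multi-modal result module would then reduce those multiple predictions to a single result.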
7. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the user is likely to get the neurological disease in response to one or more of the evaluation of the user by the medical professional indicating the user is likely to get the neurological disease, and both the image data of the brain of the user and the volumetric data for the brain of the user indicating the user is likely to get the neurological disease (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
8. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the user is likely to get the neurological disease in response to at least one of the multiple predictions indicating the user is likely to get the neurological disease (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
9. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the user is likely to get the neurological disease in response to a majority of the multiple predictions indicating the user is likely to get the neurological disease (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
10. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the user is likely to get the neurological disease in response to all of the multiple predictions indicating the user is likely to get the neurological disease (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
11. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the single result indicating the likelihood of the user getting the neurological disease by processing the multiple types of data for the user with machine learning comprising a decision tree (additional element considered to be generally linking the use of the judicial exception to a particular technological environment or field of use – see MPEP 2106.05(h)).
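For illustration only, the “at least one” / “majority” / “all” combination rules recited in claims 8-10 can be sketched as a single reduction function. The function name, rule labels, and 0.5 threshold are hypothetical and not drawn from the application or the cited art.

```python
# Illustrative sketch only: the "at least one" / "majority" / "all"
# combination rules of claims 8-10, reducing multiple per-model likelihood
# predictions to a single boolean result. Names and threshold are hypothetical.

def combine_predictions(predictions, rule="majority", threshold=0.5):
    """Reduce multiple likelihoods (0.0-1.0) to one result per the given rule."""
    flags = [p >= threshold for p in predictions]  # per-prediction "likely" flags
    if rule == "any":       # claim 8: at least one prediction indicates disease
        return any(flags)
    if rule == "majority":  # claim 9: a majority of predictions indicate disease
        return sum(flags) > len(flags) / 2
    if rule == "all":       # claim 10: all predictions indicate disease
        return all(flags)
    raise ValueError(f"unknown rule: {rule}")
```

Stated this way, each rule is a one-line aggregation over per-prediction flags, which is why the analysis treats the combination step as performable mentally.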
12. The apparatus of claim 1, wherein the neurological disease comprises Alzheimer's disease (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
13. The apparatus of claim 1, further comprising an interface module (computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component; “the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.” Alice, 134 S. Ct. at 2358) configured to execute on a computing device of a medical professional evaluating the user (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
14. The apparatus of claim 13, wherein the source module is configured to receive the multiple types of data for the user through a user interface (computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component; “the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention.” Alice, 134 S. Ct. at 2358) of the interface module displayed on an electronic display screen (computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component) of the computing device and to provide the multiple types of data to the machine learning module using an application programming interface.
15. The apparatus of claim 14, wherein the interface module is configured to receive the single result indicating the likelihood of the user getting the neurological disease from the multi-modal result module over the application programming interface (computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component) and to display the single result in the user interface on the electronic display screen of the computing device (computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component).
19. The method of claim 18, wherein the multiple types of data for the user comprise image data of a brain of the user, volumetric data for the brain of the user, and an evaluation of the user by a medical professional (further expands the mental process; a user can perform a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-5, 7-14, 16, 18-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Becich (US 2023/0266342).
Becich discloses:
1. An apparatus comprising:
a source module configured to receive “multiple types of data” (not further defined) for a user (different types of data, measurements, images to train models to predict disease, blood tests, MRI volumetric data, “The terms “marker,” “markers,” “biomarker,” and “biomarkers” encompass, without limitation, lipids, lipoproteins, proteins, cytokines, chemokines, growth factors, peptides, nucleic acids, genes, and oligonucleotides, together with their related complexes, metabolites, mutations, variants, polymorphisms, modifications, fragments, subunits, degradation products, elements, and other analytes or sample-derived measures. A marker can also include mutated proteins, mutated nucleic acids, variations in copy numbers, and/or transcript variants, in circumstances in which such mutations, variations in copy number and/or transcript variants are useful for generating a predictive model, or are useful in predictive models developed using related markers (e.g., non-mutated versions of the proteins or nucleic acids, alternative transcripts, etc.)”, 0089;
“The model training module 150 trains one or more predictive models, each predictive model receiving, as input, one or more biomarkers. In various embodiments, the model training module 150 constructs a predictive model that receives, as input, expression values of two biomarkers. In various embodiments, the model training module 150 constructs a predictive model that receives, as input, expression values of three biomarkers”, 0119;
“MRI images can be separated into 4 subsets (e.g., brain parenchymal fraction quartiles), where the first subset includes MRI images with the lowest range of brain parenchymal fraction values, the second subset includes MRI images with the next lowest range of brain parenchymal fraction values, the third subset includes MRI images with the third lowest range of brain parenchymal fraction values, and the fourth subset includes MRI images with the highest range of brain parenchymal fraction values. Thus, by training the predictive model using these reference ground truth scores, the predictive model can be trained to predict different classes according to predicted brain parenchymal fraction values”, 0114);
a machine learning “module” (reads on software, hardware, firmware, models, neural networks, etc.) configured to analyze the multiple types of data (0089, 0114, 0119) using machine learning to determine multiple predictions of likelihoods of the user getting a neurological disease (machine learning models to predict Alzheimer’s disease/MS, 0082, 0170, Table 5; “the predictive model is any one of a regression model (e.g., linear regression, logistic regression, or polynomial regression), decision tree, random forest, support vector machine, Naïve Bayes model, k-means cluster, or neural network (e.g., feed-forward networks, convolutional neural networks (CNN), deep neural networks (DNN), autoencoder neural networks, generative adversarial networks, or recurrent networks (e.g., long short-term memory networks (LSTM), bi-directional recurrent networks, deep bi-directional recurrent networks), linear mixed effects (LME) model, or any combination thereof. For example, the predictive model can be a stacked classifier that includes both a linear regression and decision tree”, 0116;
“the assessment of disease activity corresponds to a risk (e.g., likelihood) of the subject developing a disease at a subsequent time. In various embodiments, the assessment (e.g., predicted score) corresponding to the subject is compared to multiple scores”, 0145;
“The model training module 150 trains one or more predictive models, each predictive model receiving, as input, one or more biomarkers. In various embodiments, the model training module 150 constructs a predictive model that receives, as input, expression values of two biomarkers. In various embodiments, the model training module 150 constructs a predictive model that receives, as input, expression values of three biomarkers”, 0119;
“FIG. 10 depicts a multivariate analysis of a combination of biomarkers for predicting brain parenchymal fraction. Here, an ensemble of LME models were constructed, the LME models predicting BPF using the 20 protein biomarkers, age, disease duration, and sex using Leave One Out Cross Validation (LOOCV) to minimize overfitting. The importance of each feature from each model was extracted across LOOCV splits. FIG. 10 depicts the performance of the multivariate model performance which analyzes quantitative protein concentrations and demographic information”, 0470;
“MRI images can be separated into 4 subsets (e.g., brain parenchymal fraction quartiles), where the first subset includes MRI images with the lowest range of brain parenchymal fraction values, the second subset includes MRI images with the next lowest range of brain parenchymal fraction values, the third subset includes MRI images with the third lowest range of brain parenchymal fraction values, and the fourth subset includes MRI images with the highest range of brain parenchymal fraction values. Thus, by training the predictive model using these reference ground truth scores, the predictive model can be trained to predict different classes according to predicted brain parenchymal fraction values”, 0114); and
a multi-modal result module configured to determine a single result indicating a likelihood of the user getting the neurological disease based on the multiple predictions (multi-modal is not further defined and reads on one or more of the models/classifiers receiving a plurality of data and/or a plurality of predictive models being used to predict disease or score the likelihood of disease, 0116-0118, 0127; “FIG. 10 depicts a multivariate analysis of a combination of biomarkers for predicting brain parenchymal fraction. Here, an ensemble of LME models were constructed, the LME models predicting BPF using the 20 protein biomarkers, age, disease duration, and sex using Leave One Out Cross Validation (LOOCV) to minimize overfitting. The importance of each feature from each model was extracted across LOOCV splits. FIG. 10 depicts the performance of the multivariate model performance which analyzes quantitative protein concentrations and demographic information”, 0470; “predictive models for determining multiple sclerosis disease activity (e.g., multiple sclerosis disease progression) in human subjects based on the quantitative expression values of the markers”, abstract).
2. The apparatus of claim 1, wherein at least one of the multiple types of data comprises image data of a brain of the user (MRI volumetric data are one type of data that is used as input data, “analyzing expression levels of biomarkers, in conjunction with MRI volumetrics or just the biomarkers alone, in samples obtained from the subject can enable earlier detection and monitoring of MS disease progression.”, 0002; “MRI images can be analyzed and separated into different subsets according to brain parenchymal fraction values of the MRI images. For example, MRI images can be separated into 4 subsets (e.g., brain parenchymal fraction quartiles), where the first subset includes MRI images with the lowest range of brain parenchymal fraction values, the second subset includes MRI images with the next lowest range of brain parenchymal fraction values”, 0114).
3. The apparatus of claim 2, wherein the image data comprises magnetic resonance images of the brain of the user (MRI, e.g., 0002, 0114).
4. The apparatus of claim 2, wherein at least one of the multiple types of data comprises volumetric data for the brain of the user (“analyzing expression levels of biomarkers, in conjunction with MRI volumetrics or just the biomarkers alone, in samples obtained from the subject can enable earlier detection and monitoring of MS disease progression.”, 0002, 0087, 0270).
5. The apparatus of claim 4, wherein at least one of the multiple types of data comprises an evaluation of the user by a medical professional (inherent, only doctors or medical staff write prescriptions for expensive test like MRIs and interpret tests such as MRIs, 0002; “Examples of medical professionals include physicians, emergency medical technicians, nurses, first responders, psychologists, phlebotomist, medical physics personnel, nurse practitioners, surgeons, dentists, and any other obvious medical professional as would be known to one skilled in the art”, 0096).
7. The apparatus of claim 1, wherein the multi-modal result (e.g., 0470) module is configured to determine the user is likely to get the neurological disease in response to one or more of the evaluation of the user by the medical professional indicating the user is likely to get the neurological disease (0002, 0096), and both the image data of the brain of the user and the volumetric data for the brain of the user indicating the user is likely to get the neurological disease (“analyzing expression levels of biomarkers, in conjunction with MRI volumetrics or just the biomarkers alone, in samples obtained from the subject can enable earlier detection and monitoring of MS disease progression.”, 0002, 0087, 0270, 0114).
8. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the user is likely to get the neurological disease in response to at least one of the multiple predictions indicating the user is likely to get the neurological disease (MRI images, score and/or biomarkers are used by predictive models 0114; 0116-0118 to predict disease, 0124-0128).
9. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the user is likely to get the neurological disease in response to a majority (reads on using thresholds, confidence, likelihood, predictive models) of the multiple predictions indicating the user is likely to get the neurological disease (MRI images, score and/or biomarkers are used by predictive models 0114; 0116-0118 to predict disease, 0124-0128; ensemble of models, 0470).
10. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the user is likely to get the neurological disease (e.g., AD/MS) in response to all of the multiple predictions (multiple images, biomarkers, score, test results, etc., 0114, 0470) indicating the user is likely to get the neurological disease (MRI images, score and/or biomarkers are used by predictive models 0116-0118 to predict disease, 0124-0128).
11. The apparatus of claim 1, wherein the multi-modal result module is configured to determine the single result indicating the likelihood of the user getting the neurological disease by processing the multiple types of data for the user with machine learning comprising a decision tree (MRI images, score and/or biomarkers are used by predictive models 0116-0118 to predict disease, 0124-0128; “the predictive model is any one of a regression model (e.g., linear regression, logistic regression, or polynomial regression), decision tree, random forest, support vector machine, Naïve Bayes model, k-means cluster, or neural network (e.g., feed-forward networks, convolutional neural networks (CNN), deep neural networks (DNN), autoencoder neural networks, generative adversarial networks, or recurrent networks (e.g., long short-term memory networks (LSTM), bi-directional recurrent networks, deep bi-directional recurrent networks), linear mixed effects (LME) model, or any combination thereof. For example, the predictive model can be a stacked classifier that includes both a linear regression and decision tree”, 0116).
12. The apparatus of claim 1, wherein the neurological disease comprises Alzheimer's disease (“The term “disease activity” encompasses the disease activity of any neurodegenerative disease including multiple sclerosis, Parkinson's Disease, Lewy body disease, Alzheimer's Disease, Amyotrophic lateral sclerosis (ALS), motor neuron disease, Huntington's Disease, Spinal muscular atrophy, Friedreich's ataxia, Batten disease”, 0082, 0170).
13. The apparatus of claim 1, further comprising an interface module configured to execute on a computing device of a medical professional evaluating the user (inherent, only doctors or medical staff interpret MRIs, 0002; “Examples of medical professionals include physicians, emergency medical technicians, nurses, first responders, psychologists, phlebotomist, medical physics personnel, nurse practitioners, surgeons, dentists, and any other obvious medical professional as would be known to one skilled in the art”, 0096; MRIs are inherently interpreted on computing devices, Fig. 8).
14. The apparatus of claim 13, wherein the source module is configured to receive the multiple types of data (MRI images, test results, biomarkers, etc. 0002; Fig. 8; “In various embodiments, the reference score further corresponds to a mild/moderate MS disease progression or a severe MS disease progression. In various embodiments, the expression levels of the plurality of biomarkers is determined from a test sample obtained from the subject. In various embodiments, the test sample is a blood or serum sample. In various embodiments, the subject has multiple sclerosis, is suspected of having multiple sclerosis, or was previously diagnosed with multiple sclerosis. In various embodiments, obtaining or having obtained the dataset comprises performing an immunoassay to determine the expression levels of the plurality of biomarkers.”, 0028) for the user through a user interface of the interface module displayed on an electronic display screen (e.g., 818, Fig. 8) of the computing device and to provide the multiple types of data to the machine learning module using an application programming interface (using GUI, interface, etc. “The input interface 814 is a touch-screen interface, a mouse, track ball, or other type of pointing device, a keyboard, or some combination thereof, and is used to input data into the computer 800. In some embodiments, the computer 800 may be configured to receive input (e.g., commands) from the input interface 814 via gestures from the user. The graphics adapter 812 displays images and other information on the display 818. The network adapter 816 couples the computer 800 to one or more computer network”, 0285).
16. An apparatus comprising:
means for receiving multiple types of data for a user (MRI images, score and/or biomarkers are used by predictive models 0116-0118 to predict disease, 0124-0128);
means for analyzing the multiple types of data using machine learning to determine multiple predictions of likelihoods of the user getting a neurological disease (“the predictive model is any one of a regression model (e.g., linear regression, logistic regression, or polynomial regression), decision tree, random forest, support vector machine, Naïve Bayes model, k-means cluster, or neural network (e.g., feed-forward networks, convolutional neural networks (CNN), deep neural networks (DNN), autoencoder neural networks, generative adversarial networks, or recurrent networks (e.g., long short-term memory networks (LSTM), bi-directional recurrent networks, deep bi-directional recurrent networks), linear mixed effects (LME) model, or any combination thereof. For example, the predictive model can be a stacked classifier that includes both a linear regression and decision tree.”, 0116); and
means for determining a single result indicating a likelihood of the user getting the neurological disease based on the multiple predictions (plurality of predictive models being used to predict disease or score the likelihood of disease, 0116-0118, 0127; “FIG. 10 depicts a multivariate analysis of a combination of biomarkers for predicting brain parenchymal fraction. Here, an ensemble of LME models were constructed, the LME models predicting BPF using the 20 protein biomarkers, age, disease duration, and sex using Leave One Out Cross Validation (LOOCV) to minimize overfitting. The importance of each feature from each model was extracted across LOOCV splits. FIG. 10 depicts the performance of the multivariate model performance which analyzes quantitative protein concentrations and demographic information”, 0470; “predictive models for determining multiple sclerosis disease activity (e.g., multiple sclerosis disease progression) in human subjects based on the quantitative expression values of the markers”, abstract).
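For clarity of the mapped teaching, the ensemble/stacked-classifier operation described in Becich at 0116-0118 (multiple predictive models each producing a prediction, reduced to a single likelihood) can be illustrated by the following sketch. This is an illustrative example only, assuming scikit-learn-style models; none of the names or values come from the reference.

```python
# Sketch of the mapped teaching (0116-0118): multiple predictive models each
# output a likelihood of disease, and a single combined result is derived
# from the multiple predictions (here, by simple averaging).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Stand-in multi-modal feature data (illustrative, not from the reference).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

models = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
for m in models:
    m.fit(X, y)

# Multiple predictions (one per model) for a single subject...
subject = X[:1]
predictions = [m.predict_proba(subject)[0, 1] for m in models]

# ...reduced to a single result indicating the likelihood of disease.
single_result = sum(predictions) / len(predictions)
print(round(single_result, 3))
```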
18. A method comprising: receiving multiple types of data for a user (MRI images, score and/or biomarkers are used by predictive models 0116-0118 to predict disease, 0124-0128);
analyzing the multiple types of data using machine learning to determine multiple predictions of likelihoods of the user getting a neurological disease (“the predictive model is any one of a regression model (e.g., linear regression, logistic regression, or polynomial regression), decision tree, random forest, support vector machine, Naïve Bayes model, k-means cluster, or neural network (e.g., feed-forward networks, convolutional neural networks (CNN), deep neural networks (DNN), autoencoder neural networks, generative adversarial networks, or recurrent networks (e.g., long short-term memory networks (LSTM), bi-directional recurrent networks, deep bi-directional recurrent networks), linear mixed effects (LME) model, or any combination thereof. For example, the predictive model can be a stacked classifier that includes both a linear regression and decision tree.”, 0116); and determining a single result indicating a likelihood of the user getting the neurological disease based on the multiple predictions (plurality of predictive models being used to predict disease or score the likelihood of disease, 0116-0118, 0127; “FIG. 10 depicts a multivariate analysis of a combination of biomarkers for predicting brain parenchymal fraction. Here, an ensemble of LME models were constructed, the LME models predicting BPF using the 20 protein biomarkers, age, disease duration, and sex using Leave One Out Cross Validation (LOOCV) to minimize overfitting. The importance of each feature from each model was extracted across LOOCV splits. FIG. 10 depicts the performance of the multivariate model performance which analyzes quantitative protein concentrations and demographic information”, 0470; “predictive models for determining multiple sclerosis disease activity (e.g., multiple sclerosis disease progression) in human subjects based on the quantitative expression values of the markers”, abstract).
19. The method of claim 18, wherein the multiple types of data for the user comprise image data of a brain of the user (MRI), volumetric data for the brain of the user (“analyzing expression levels of biomarkers, in conjunction with MRI volumetrics or just the biomarkers alone, in samples obtained from the subject can enable earlier detection and monitoring of MS disease progression.”, 0002, 0087, 0270), and an evaluation of the user by a medical professional (inherent, only doctors or medical staff interpret MRIs, 0002; “Examples of medical professionals include physicians, emergency medical technicians, nurses, first responders, psychologists, phlebotomist, medical physics personnel, nurse practitioners, surgeons, dentists, and any other obvious medical professional as would be known to one skilled in the art”, 0096).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 6, 17, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Becich, as shown above, in view of Guo (US 2021/0150671).
6. The apparatus of claim 5, wherein the machine learning module is configured to analyze the evaluation of the user by a medical professional (0096) using a random forest (“the predictive model is any one of a regression model (e.g., linear regression, logistic regression, or polynomial regression), decision tree, random forest, support vector machine, Naïve Bayes model, k-means cluster, or neural network (e.g., feed-forward networks, convolutional neural networks (CNN), deep neural networks (DNN), autoencoder neural networks, generative adversarial networks, or recurrent networks (e.g., long short-term memory networks (LSTM), bi-directional recurrent networks, deep bi-directional recurrent networks), linear mixed effects (LME) model, or any combination thereof.”, 0116-7, 0122, 0326), to analyze the volumetric data of the brain of the user using K nearest neighbors (0116-7), and to analyze the image data of the brain of the user using a convolutional neural network (0116) comprising a residual neural network.
Becich fails to particularly call for using a residual NN.
Guo teaches residual NNs are well-known (“The performance of the U-Net with only attention units (“AttU-Net”), U-Net with only residual units (“ResU-Net”) and the U-Net with both residual and attention unit (“ResAttU-Net”) were analyzed with the input of Pre+Low image. The exemplary deep learning architecture included the out-stand five-layer ResAttU-Net as illustrated in FIG. 2. In particular, FIG. 2 shows an exemplary diagram illustrating the exemplary ResAttU-Net architecture according to an exemplary embodiment of the present disclosure. The exemplary ResAttU-Net architecture includes a contraction path that can encode high resolution data into low resolution representations, and an expansion path that can decode such encoded representations back to high-resolution images”, 0094-5; 0009, 0084, 0125).
It would have been obvious to combine the references before the effective filing date because they are in the same field of endeavor and because adding Guo's ResU-Net would serve the purpose of encoding high-resolution images.
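The defining feature of the residual neural network taught by Guo is the identity "skip" connection, y = F(x) + x, which allows deep networks to pass high-resolution information forward unchanged. The following minimal sketch illustrates that mechanism only; it is not Guo's ResAttU-Net architecture, and all names and weights are illustrative.

```python
import numpy as np

def conv1x1(x, w):
    # Trivial 1x1 "convolution": a per-channel linear map; w has shape (C, C).
    return x @ w

def residual_unit(x, w1, w2):
    # Two-layer transform F(x) with ReLU, plus the identity shortcut:
    # output = F(x) + x (the residual "skip" connection).
    h = np.maximum(conv1x1(x, w1), 0.0)
    return conv1x1(h, w2) + x

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))           # 4 "pixels", 8 channels (illustrative)
w1 = rng.normal(size=(8, 8)) * 0.01   # near-zero weights for the demo
w2 = rng.normal(size=(8, 8)) * 0.01
y = residual_unit(x, w1, w2)

# With near-zero weights F(x) is small, so the skip connection dominates
# and the output stays close to the input.
print(np.allclose(y, x, atol=0.1))
```

This identity-preserving behavior is why residual units are favored for the contraction/expansion paths Guo describes for encoding and decoding high-resolution image data.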
17. The apparatus of claim 16, wherein the multiple types of data for the user comprise image data of a brain (MRI) of the user, volumetric data for the brain of the user (“analyzing expression levels of biomarkers, in conjunction with MRI volumetrics or just the biomarkers alone, in samples obtained from the subject can enable earlier detection and monitoring of MS disease progression.”, 0002, 0087, 0270), and an evaluation of the user by a medical professional (inherent, only doctors or medical staff interpret MRIs, 0002; “Examples of medical professionals include physicians, emergency medical technicians, nurses, first responders, psychologists, phlebotomist, medical physics personnel, nurse practitioners, surgeons, dentists, and any other obvious medical professional as would be known to one skilled in the art”, 0096), and wherein the machine learning comprises a random forest to analyze the evaluation of the user by the medical professional, a K nearest neighbor classifier to analyze the volumetric data of the brain of the user, and a convolutional neural network (“the predictive model is any one of a regression model (e.g., linear regression, logistic regression, or polynomial regression), decision tree, random forest, support vector machine, Naïve Bayes model, k-means cluster, or neural network (e.g., feed-forward networks, convolutional neural networks (CNN), deep neural networks (DNN), autoencoder neural networks, generative adversarial networks, or recurrent networks (e.g., long short-term memory networks (LSTM), bi-directional recurrent networks, deep bi-directional recurrent networks), linear mixed effects (LME) model, or any combination thereof. For example, the predictive model can be a stacked classifier that includes both a linear regression and decision tree.”, 0116) comprising a residual neural network to analyze the image data of the brain.
Becich fails to particularly call for using a residual NN.
Guo teaches residual NNs are well-known (“The performance of the U-Net with only attention units (“AttU-Net”), U-Net with only residual units (“ResU-Net”) and the U-Net with both residual and attention unit (“ResAttU-Net”) were analyzed with the input of Pre+Low image. The exemplary deep learning architecture included the out-stand five-layer ResAttU-Net as illustrated in FIG. 2. In particular, FIG. 2 shows an exemplary diagram illustrating the exemplary ResAttU-Net architecture according to an exemplary embodiment of the present disclosure. The exemplary ResAttU-Net architecture includes a contraction path that can encode high resolution data into low resolution representations, and an expansion path that can decode such encoded representations back to high-resolution images”, 0094-5; 0009, 0084, 0125).
It would have been obvious to combine the references before the effective filing date because they are in the same field of endeavor and because adding Guo's ResU-Net would serve the purpose of encoding high-resolution images.
20. The method of claim 19, wherein the machine learning comprises a random forest to analyze the evaluation of the user by the medical professional (inherent, only doctors or medical staff interpret MRIs, 0002; “Examples of medical professionals include physicians, emergency medical technicians, nurses, first responders, psychologists, phlebotomist, medical physics personnel, nurse practitioners, surgeons, dentists, and any other obvious medical professional as would be known to one skilled in the art”, 0096), a K nearest neighbor classifier to analyze the volumetric data of the brain of the user (0002, 0087, 0270), and a convolutional neural network (“the predictive model is any one of a regression model (e.g., linear regression, logistic regression, or polynomial regression), decision tree, random forest, support vector machine, Naïve Bayes model, k-means cluster, or neural network (e.g., feed-forward networks, convolutional neural networks (CNN), deep neural networks (DNN), autoencoder neural networks, generative adversarial networks, or recurrent networks (e.g., long short-term memory networks (LSTM), bi-directional recurrent networks, deep bi-directional recurrent networks), linear mixed effects (LME) model, or any combination thereof. For example, the predictive model can be a stacked classifier that includes both a linear regression and decision tree.”, 0116) comprising a residual neural network to analyze the image data of the brain.
Becich fails to particularly call for using a residual NN.
Guo teaches residual NNs are well-known (“The performance of the U-Net with only attention units (“AttU-Net”), U-Net with only residual units (“ResU-Net”) and the U-Net with both residual and attention unit (“ResAttU-Net”) were analyzed with the input of Pre+Low image. The exemplary deep learning architecture included the out-stand five-layer ResAttU-Net as illustrated in FIG. 2. In particular, FIG. 2 shows an exemplary diagram illustrating the exemplary ResAttU-Net architecture according to an exemplary embodiment of the present disclosure. The exemplary ResAttU-Net architecture includes a contraction path that can encode high resolution data into low resolution representations, and an expansion path that can decode such encoded representations back to high-resolution images”, 0094-5; 0009, 0084, 0125).
It would have been obvious to combine the references before the effective filing date because they are in the same field of endeavor and because adding Guo's ResU-Net would serve the purpose of encoding high-resolution images.
Claim Rejections - 35 USC § 103
Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Becich in view of Koller (US 2021/0366577).
15. The apparatus of claim 14, wherein the interface module is configured to receive the single result indicating the likelihood of the user getting the neurological disease (AD/MS) from the multi-modal result module (models, ensemble receiving plurality of data) over the application programming interface (API) and to display (displays for medical staff, Fig. 8) the single result in the user interface on the electronic display screen of the computing device.
Becich fails to particularly call for APIs.
Koller teaches APIs (“The methods described above, including the methods of training and deploying a cellular disease model, are, in some embodiments, performed on a computing device. Examples of a computing device can include a personal computer, desktop computer laptop, server computer, a computing node within a cluster, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, and the like.”, 0461; “[t]he clinical phenotype system 204 communicates with third party entities 702A or 702B through one or more application programming interfaces (API) 706.”, 0477-0479).
It would have been obvious to combine the references before the effective filing date because they are in the same field of endeavor and because it is well known to perform telemedicine using mobile/remote devices comprising APIs so that patients can get help quickly or be monitored remotely.
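The API-mediated flow mapped above (an interface module receiving the single result over an application programming interface for display) can be illustrated by the following minimal sketch. All names, values, and the endpoint shape are hypothetical; none come from Becich or Koller.

```python
# Hypothetical sketch of the mapped flow: an interface module calls an
# application programming interface and receives back the single result
# indicating the likelihood of disease, which it then displays.
import json

def api_get_result(request_body: str) -> str:
    """Server-side API endpoint (stub): returns the single result as JSON."""
    request = json.loads(request_body)
    # In a real system, the deployed predictive model(s) would be consulted
    # here; the likelihood below is a hard-coded placeholder.
    result = {"subject_id": request["subject_id"], "likelihood": 0.42}
    return json.dumps(result)

# Client side: the interface module submits data over the API...
response = api_get_result(json.dumps({"subject_id": "S-001"}))

# ...and displays the single result on the electronic display screen.
single_result = json.loads(response)["likelihood"]
print(f"Likelihood of disease: {single_result:.0%}")  # prints "Likelihood of disease: 42%"
```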
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID R VINCENT whose telephone number is (571) 272-3080. The examiner can normally be reached Mon-Fri 12-8:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID R VINCENT/Primary Examiner, Art Unit 2123