Prosecution Insights
Last updated: April 19, 2026
Application No. 17/788,486

METHODS AND SYSTEMS FOR DIAGNOSIS OF MYALGIC ENCEPHALOMYELITIS/CHRONIC FATIGUE SYNDROME (ME/CFS) FROM IMMUNE MARKERS

Final Rejection (§102, §103)
Filed: Jun 23, 2022
Examiner: FERNANDEZ RIVAS, OMAR F
Art Unit: 2128
Tech Center: 2100 (Computer Architecture & Software)
Assignee: The Jackson Laboratory
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 6m
Grant Probability With Interview: 68%

Examiner Intelligence

Career Allow Rate: 69% (189 granted / 274 resolved), +14.0% vs TC average, above average
Interview Lift: -0.5% among resolved cases with interview (minimal)
Typical Timeline: 3y 6m average prosecution; 8 applications currently pending
Career History: 282 total applications across all art units

Statute-Specific Performance

§101: 25.6% (-14.4% vs TC avg)
§103: 30.2% (-9.8% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center averages are estimates, based on career data from 274 resolved cases.
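The headline figures above are simple ratios; as a sanity check, the short Python sketch below reproduces the career allowance rate from the raw counts and the Tech Center average implied by the stated +14.0% delta (the delta-derived figure is an inference from this page, not USPTO data):

```python
# Reproduce the examiner statistics quoted above from the raw counts.
granted, resolved = 189, 274

career_allow_rate = granted / resolved         # career allowance rate
implied_tc_average = career_allow_rate - 0.14  # page lists +14.0% vs TC avg

print(f"Career allowance rate: {career_allow_rate:.1%}")         # 69.0%
print(f"Implied Tech Center average: {implied_tc_average:.1%}")  # 55.0%
```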

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 11, 12, 17-21, 23 and 24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lidbury et al. (“Rethinking ME/CFS diagnostic reference intervals via machine learning, and the utility of Activin B for defining symptom severity”, referred to as Lidbury).

Claim 1

Lidbury anticipates a method for developing a predictive model for diagnosis of myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) in a human (Lidbury: pages 4-6, Section 2.6) comprising: receiving immune system data for each member of a population comprising healthy humans and humans with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) (Lidbury: Page 2, “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants.
The investigation directly compared the ME/CFS cases to healthy controls, but also examined the application of the weighted standing time (WST), as a measure of symptom severity, to stratify the ME/CFS cohort into mild to severe classes prior to analysis; Page 9: Marker variation and survey results were investigated after the ME/CFS cohort was stratified by WST for symptom severity (classes 1–3) and compared to healthy controls (class 0) (Table 3); Examiner’s Note: markers are obtained from participants and compared to healthy controls); extracting a set of features from the immune system data (Lidbury, Pages 2-3, section 2.2: “After the standing test, non-fasting venous blood samples were collected for routine pathology testing, in addition to a parathyroid hormone (PTH), thyroid function testing (TFT), vitamin D and serum activin B [8]. For participants who were able, 24-h urine samples were collected and the volume, sodium (Na+), potassium (K+) and creatinine 24-h excretion rates were calculated”. Also see Tables 2 and 3; Examiner’s Note: it is noted that the claim does not define what the extracted features represent), wherein the set of features to be extracted is selected based at least in part on a respective importance score of each feature of the set of features (Lidbury: Page 4: Prior to conducting the appropriate statistical analyses, all raw data collected for investigation were subject to a one-sample Kolmogorov–Smirnov (K-S) test to assess whether they fulfilled a normal distribution, with K-S results of p ≤ 0.05 indicating that the specific marker distribution was significantly different from a normal curve. Based on the K-S results (Table 2), statistical significance between two groups was estimated by a Mann–Whitney U test, and three or more groups by Kruskal–Wallis non-parametric tests. Jonckheere–Terpstra non-parametric tests were also applied where the groups were clearly ordinal. 
Descriptive results were presented as the median and 25th–75th interquartile range (IQR). Significance was set at p < 0.05 for the two group comparisons using the Mann–Whitney U test, and also for comparisons across more than two classes in the Kruskal–Wallis (KW) test; Page 12, section 3.3: “Figure 2 presents the results of two RFA, one with five routine pathology markers, and the other with activin B included in the same pathology model. The pathology markers represent the most effective constellation of blood or urine test results that most successfully predicted WST categories 0, 1 and 2, with an overall predictive accuracy of 62–65%. The addition of extra pathology variables either did not improve the accuracy of the model or reduced overall WST class predictive accuracy.”; Page 15: Similar to the ranking of markers for WST classes 0 versus 1 (Figures 3 and 4), the urinary creatinine excretion rate, ALP and activin B were the top-ranked predictors of all the WST classes (Figure 5), which stratifies ME/CFS severity as according to orthostatic intolerance testing performance. While the cases were correctly predicted, WST class 0 recorded an (OOB) error rate of 8.7%, while class 2 recorded a 17% error rate. However, class 1 (Moderate severity) was perfectly predicted (Figure 5), suggesting again that the marker set including activin B is best for predicting symptom severity ranging from healthy, through mild, to moderate ME/CFS. The extent of the error rate in the severe cases indicates wider variation in these ME/CFS cases. Future studies involving larger participant samples will assist in determining predictive parameters with greater accuracy; Pages 17-18: “As an extension of RFA, the panel of six predictive markers was assessed by receiver operating characteristic (ROC) curves to investigate the impact of test profile sensitivity and specificity (false negative, false positive rates). 
Pairwise WST classes were analysed per ROC, both for the entire dataset, and for the correctly predicted cases for each WST class (0, 1, 2). Activin B remained in the top three in terms of predictor importance, with the model producing an AUC of 0.76 for all cases and an AUC of 0.963 for models comprising only correctly predicted outcomes. The correctly predicted cases from each WST class were subsequently used to calculate new reference intervals for each of the six RFA predictors (Figure 2); Examiner’s note: the claim does not define what the importance score represents or how it is used in order to extract the features. Note that the system is trained using the best predictive markers); and training a machine learning algorithm using the set of features to classify a human as healthy or having ME/CFS (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques; Examiner’s Note: It is inherent that machine learning models must be trained with a training set in order to produce its results).

Claim 2

Lidbury anticipates evaluating performance of the predictive model with a test set of immune system data for a population comprising healthy humans and humans with ME/CFS (Lidbury, Page 7: “With the recognition of a predictor variable pattern by RFA, associated with the WST class, the diagnostic potential of the multi-marker profile to accurately separate ME severity was examined by receiver operating characteristic (ROC) curves, supported by an area under curve (AUC) calculation.
A ROC curve plots assay sensitivity (rate of true positives) against the false positive rate (100—Specificity), with AUC estimating the accuracy of separating the two classes. As this suggests, only two WST classes were compared at one time, namely classes 0 versus 1, 0 versus 2, and class 1 versus class 2”; Page 13: As well as ranking predictors, the RF algorithm allowed the prediction of case category (WST class) based on the variables entered into the model. To understand the power of correctly predicted cases as a data modelling method to refine decisions on the diagnostic acuity of marker patterns, ROC was repeated for WST classes 0 versus 1, with only correctly RFA predicted 0 or 1 cases included (Figure 4); Page 15: New reference intervals based on correctly predicted cases for each analyte of interest were calculated based on the following criteria: (1) comparison of the ME/CFS cohort with the healthy control group; (2) calculation of reference intervals following the WST criteria of categories 0 (healthy controls plus mild ME/CFS), 1 (moderate symptoms), and 2 (severe symptoms).

Claim 3

Lidbury anticipates wherein performance is evaluated using sensitivity, specificity, accuracy, positive predictive value, negative predictive value, F1 score, a receiver operating characteristic (ROC) curve, or a combination thereof (Lidbury, Page 7: “With the recognition of a predictor variable pattern by RFA, associated with the WST class, the diagnostic potential of the multi-marker profile to accurately separate ME severity was examined by receiver operating characteristic (ROC) curves, supported by an area under curve (AUC) calculation. A ROC curve plots assay sensitivity (rate of true positives) against the false positive rate (100—Specificity), with AUC estimating the accuracy of separating the two classes”).
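Since claim 3 recites the usual suite of diagnostic-test metrics, a standard-library sketch may help fix terms: each statistic falls out of the four confusion-matrix counts, and AUC can be computed as pairwise concordance (the Mann–Whitney formulation underlying the ROC analysis). The counts and scores below are invented for illustration:

```python
# Standard diagnostic-test metrics from confusion-matrix counts
# (tp = true positives, fp = false positives, etc.).
def binary_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),               # true positive rate
        "specificity": tn / (tn + fp),               # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                       # positive predictive value
        "npv": tn / (tn + fn),                       # negative predictive value
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

def auc(pos_scores, neg_scores):
    # Probability that a random positive case outscores a random negative
    # one (ties count half); this equals the area under the ROC curve.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(binary_metrics(tp=45, fp=5, tn=40, fn=10)["accuracy"])  # 0.85
print(round(auc([0.9, 0.8, 0.7], [0.6, 0.5, 0.8]), 3))        # 0.833
```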
Claim 4

Lidbury anticipates wherein the machine learning algorithm is a random forest classifier, a support vector machine, an artificial neural network, or a combination thereof (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques”).

Claim 5

Lidbury anticipates receiving other data for each human in the population (Lidbury, Page 3: “Data were collected for each participant as standard practice for the CFS Discovery staff and stored electronically in the secure clinic database. Each participant/patient file contained all the questionnaire and survey data, the printed pathology results (Australian Clinical Laboratories, South Australia), the standing test (orthostatic intolerance) data, including blood pressure (BP), heart rate (HR) and associated autonomic measurements and calculations, the standing time and standing difficulty, as well as clinical notes recording patient details (age, sex, weight, height); Page 7: “The direct comparison of a range of pathology (blood, urine, serum) markers, questionnaire results and activin B are summarised in Table 2. The subset of pathology markers included were informed by exploratory data interrogation by machine learning (Figures 1 and 2), with additional serum electrolytes, platelets, neutrophils and parathormone (parathyroid hormone—PTH) also included because of clinical interest in the potential importance of these markers, as well as for the association with renal function suggested by other results”).
wherein extracting the set of features from the immune system data comprises extracting the set of features from the immune system data and the other data (Lidbury, Page 7: “The direct comparison of a range of pathology (blood, urine, serum) markers, questionnaire results and activin B are summarised in Table 2. The subset of pathology markers included were informed by exploratory data interrogation by machine learning (Figures 1 and 2), with additional serum electrolytes, platelets, neutrophils and parathormone (parathyroid hormone—PTH) also included because of clinical interest in the potential importance of these markers, as well as for the association with renal function suggested by other results”; Tables 3 and 4). wherein the other data for each patient comprises clinical symptoms, demographic information, metabolic biomarkers, microbiome biomarkers, clinical history, genetics, or a combination thereof (Lidbury, Page 3: “Data were collected for each participant as standard practice for the CFS Discovery staff and stored electronically in the secure clinic database. Each participant/patient file contained all the questionnaire and survey data, the printed pathology results (Australian Clinical Laboratories, South Australia), the standing test (orthostatic intolerance) data, including blood pressure (BP), heart rate (HR) and associated autonomic measurements and calculations, the standing time and standing difficulty, as well as clinical notes recording patient details (age, sex, weight, height); Page 7: “The direct comparison of a range of pathology (blood, urine, serum) markers, questionnaire results and activin B are summarised in Table 2. 
The subset of pathology markers included were informed by exploratory data interrogation by machine learning (Figures 1 and 2), with additional serum electrolytes, platelets, neutrophils and parathormone (parathyroid hormone—PTH) also included because of clinical interest in the potential importance of these markers, as well as for the association with renal function suggested by other results”).

Claim 11

Lidbury anticipates a method for diagnosing myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) in a subject, comprising: receiving immune system data of a subject (Lidbury: Page 2, “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants. The investigation directly compared the ME/CFS cases to healthy controls, but also examined the application of the weighted standing time (WST), as a measure of symptom severity, to stratify the ME/CFS cohort into mild to severe classes prior to analysis”); extracting a set of features from the immune system data (Lidbury, Pages 2-3, section 2.2: “After the standing test, non-fasting venous blood samples were collected for routine pathology testing, in addition to a parathyroid hormone (PTH), thyroid function testing (TFT), vitamin D and serum activin B [8]. For participants who were able, 24-h urine samples were collected and the volume, sodium (Na+), potassium (K+) and creatinine 24-h excretion rates were calculated”. Also see Tables 2 and 3; Examiner’s Note: it is noted that the claim does not define what the extracted features represent), wherein the set of features to be extracted is selected based at least in part on a respective importance score of each feature of the set of features (Lidbury: Page 12, section 3.3: “Figure 2 presents the results of two RFA, one with five routine pathology markers, and the other with activin B included in the same pathology model.
The pathology markers represent the most effective constellation of blood or urine test results that most successfully predicted WST categories 0, 1 and 2, with an overall predictive accuracy of 62–65%. The addition of extra pathology variables either did not improve the accuracy of the model or reduced overall WST class predictive accuracy.”; Page 15: Similar to the ranking of markers for WST classes 0 versus 1 (Figures 3 and 4), the urinary creatinine excretion rate, ALP and activin B were the top-ranked predictors of all the WST classes (Figure 5), which stratifies ME/CFS severity as according to orthostatic intolerance testing performance. While the cases were correctly predicted, WST class 0 recorded an (OOB) error rate of 8.7%, while class 2 recorded a 17% error rate. However, class 1 (Moderate severity) was perfectly predicted (Figure 5), suggesting again that the marker set including activin B is best for predicting symptom severity ranging from healthy, through mild, to moderate ME/CFS. The extent of the error rate in the severe cases indicates wider variation in these ME/CFS cases. Future studies involving larger participant samples will assist in determining predictive parameters with greater accuracy; Pages 17-18: “As an extension of RFA, the panel of six predictive markers was assessed by receiver operating characteristic (ROC) curves to investigate the impact of test profile sensitivity and specificity (false negative, false positive rates). Pairwise WST classes were analysed per ROC, both for the entire dataset, and for the correctly predicted cases for each WST class (0, 1, 2). Activin B remained in the top three in terms of predictor importance, with the model producing an AUC of 0.76 for all cases and an AUC of 0.963 for models comprising only correctly predicted outcomes. 
The correctly predicted cases from each WST class were subsequently used to calculate new reference intervals for each of the six RFA predictors (Figure 2); inputting the features to a machine trained classifier comprising a predictive model (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques; Page 11: “As assessed by algorithm ensembles that calculated percentage accuracy and the kappa statistic (Figure 1), Random Forest Analysis (RFA) was chosen as the machine learning method to conduct deeper analyses of the ME/CFS results”) classifying, by application of the machine trained classifier to the features, the subject as being healthy or having ME/CFS (Lidbury, Page 5: “All the RFA results presented herein used the three-class (WST) model to detect predictors of absent or mild ME/CFS symptoms (0), compared to moderate (1) or severe (2) symptoms (Table 1b); Page 10: “WST classes 0 and 1 were combined to increase sample size for subsequent machine learning (ML), resulting in adjusted WST classes representing categories defining absent or mild symptoms (0), moderate (1) or severe ME/CFS symptoms (2), as reflected by orthostatic intolerance”); and outputting the classification (Lidbury, Page 5: “All the RFA results presented herein used the three-class (WST) model to detect predictors of absent or mild ME/CFS symptoms (0), compared to moderate (1) or severe (2) symptoms (Table 1b); Page 10: “WST classes 0 and 1 were combined to increase sample size for subsequent machine learning (ML), resulting in adjusted WST classes representing categories defining absent or mild symptoms (0),
moderate (1) or severe ME/CFS symptoms (2), as reflected by orthostatic intolerance”).

Claim 17

Lidbury anticipates receiving other data for the subject, wherein the other data for the subject comprises clinical symptoms, demographic information, metabolic biomarkers, microbiome biomarkers, clinical history, genetics, or a combination thereof (Lidbury, Page 3: “Data were collected for each participant as standard practice for the CFS Discovery staff and stored electronically in the secure clinic database. Each participant/patient file contained all the questionnaire and survey data, the printed pathology results (Australian Clinical Laboratories, South Australia), the standing test (orthostatic intolerance) data, including blood pressure (BP), heart rate (HR) and associated autonomic measurements and calculations, the standing time and standing difficulty, as well as clinical notes recording patient details (age, sex, weight, height); Page 7: “The direct comparison of a range of pathology (blood, urine, serum) markers, questionnaire results and activin B are summarised in Table 2. The subset of pathology markers included were informed by exploratory data interrogation by machine learning (Figures 1 and 2), with additional serum electrolytes, platelets, neutrophils and parathormone (parathyroid hormone—PTH) also included because of clinical interest in the potential importance of these markers, as well as for the association with renal function suggested by other results”).

Claim 18

Lidbury anticipates wherein extracting [[a]] the set of features from the immune system data comprises extracting [[a]] the set of features from the immune system data and the other data (Lidbury, Page 7: “The direct comparison of a range of pathology (blood, urine, serum) markers, questionnaire results and activin B are summarised in Table 2.
The subset of pathology markers included were informed by exploratory data interrogation by machine learning (Figures 1 and 2), with additional serum electrolytes, platelets, neutrophils and parathormone (parathyroid hormone—PTH) also included because of clinical interest in the potential importance of these markers, as well as for the association with renal function suggested by other results”; Tables 3 and 4).

Claim 19

Lidbury anticipates wherein the predictive model of the machine trained classifier has an AUC of at least 0.75 (Lidbury, Page 12: “Figure 3 presents the RFA and ROC results for the comparison of WST classes 0 and 1 (Table 1b). Figure 3a shows the Gini Index and Importance (Mean Decrease Accuracy) weighting of predictor variables to discriminate between WST classes 0 and 1 (mild symptoms and healthy cases combined versus moderate ME/CFS symptoms). The rate of urinary creatinine excretion was the top-ranked predictor, followed by serum activin B. For the total constellation of markers, the 0 versus 1 AUC was calculated at 0.755, with the ROC curve showing a clear separation from 0.50 (Figure 3b); Page 13: “The ROC curve showed an excellent separation from the 0.50 threshold, with an AUC of 0.963, which was clearly superior to AUC 0.755 found for the general model of the same WST classes that included all cases, regardless of correct prediction (Figure 3)”).

Claim 21

Lidbury anticipates a system for diagnosing myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) in a subject, comprising: a processor (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors.
To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques; Examiner’s Note: Machine learning models are computer implemented and therefore inherently require a processor); and a memory storing computer executable instructions, which when executed by the processor cause the processor to perform operations comprising (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques; Examiner’s Note: Machine learning models are computer implemented and therefore inherently require memory storing computer instructions): receiving immune system data of a subject (Lidbury: Page 2, “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants. The investigation directly compared the ME/CFS cases to healthy controls, but also examined the application of the weighted standing time (WST), as a measure of symptom severity, to stratify the ME/CFS cohort into mild to severe classes prior to analysis”); extracting a set of features from the immune system data (Lidbury, Pages 2-3, section 2.2: “After the standing test, non-fasting venous blood samples were collected for routine pathology testing, in addition to a parathyroid hormone (PTH), thyroid function testing (TFT), vitamin D and serum activin B [8]. 
For participants who were able, 24-h urine samples were collected and the volume, sodium (Na+), potassium (K+) and creatinine 24-h excretion rates were calculated”. Also see Tables 2 and 3; Examiner’s Note: it is noted that the claim does not define what the extracted features represent), wherein the set of features to be extracted is selected based at least in part on a respective importance score of each feature of the set of features (Lidbury: Page 12, section 3.3: “Figure 2 presents the results of two RFA, one with five routine pathology markers, and the other with activin B included in the same pathology model. The pathology markers represent the most effective constellation of blood or urine test results that most successfully predicted WST categories 0, 1 and 2, with an overall predictive accuracy of 62–65%. The addition of extra pathology variables either did not improve the accuracy of the model or reduced overall WST class predictive accuracy.”; Page 15: Similar to the ranking of markers for WST classes 0 versus 1 (Figures 3 and 4), the urinary creatinine excretion rate, ALP and activin B were the top-ranked predictors of all the WST classes (Figure 5), which stratifies ME/CFS severity as according to orthostatic intolerance testing performance. While the cases were correctly predicted, WST class 0 recorded an (OOB) error rate of 8.7%, while class 2 recorded a 17% error rate. However, class 1 (Moderate severity) was perfectly predicted (Figure 5), suggesting again that the marker set including activin B is best for predicting symptom severity ranging from healthy, through mild, to moderate ME/CFS. The extent of the error rate in the severe cases indicates wider variation in these ME/CFS cases. 
Future studies involving larger participant samples will assist in determining predictive parameters with greater accuracy; Pages 17-18: “As an extension of RFA, the panel of six predictive markers was assessed by receiver operating characteristic (ROC) curves to investigate the impact of test profile sensitivity and specificity (false negative, false positive rates). Pairwise WST classes were analysed per ROC, both for the entire dataset, and for the correctly predicted cases for each WST class (0, 1, 2). Activin B remained in the top three in terms of predictor importance, with the model producing an AUC of 0.76 for all cases and an AUC of 0.963 for models comprising only correctly predicted outcomes. The correctly predicted cases from each WST class were subsequently used to calculate new reference intervals for each of the six RFA predictors (Figure 2); Examiner’s note: the claim does not define what the importance score represents or how it is used in order to extract the features. Note that the system is trained using the best predictive markers); inputting the set of features to a classifier (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors.
To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques; Page 11: “As assessed by algorithm ensembles that calculated percentage accuracy and the kappa statistic (Figure 1), Random Forest Analysis (RFA) was chosen as the machine learning method to conduct deeper analyses of the ME/CFS results”); classifying, by application of the classifier to the set of features, the subject as being healthy or having ME/CFS (Lidbury, Page 5: “All the RFA results presented herein used the three-class (WST) model to detect predictors of absent or mild ME/CFS symptoms (0), compared to moderate (1) or severe (2) symptoms (Table 1b); Page 10: “WST classes 0 and 1 were combined to increase sample size for subsequent machine learning (ML), resulting in adjusted WST classes representing categories defining absent or mild symptoms (0), moderate (1) or severe ME/CFS symptoms (2), as reflected by orthostatic intolerance”); and outputting the classification (Lidbury, Page 5: “All the RFA results presented herein used the three-class (WST) model to detect predictors of absent or mild ME/CFS symptoms (0), compared to moderate (1) or severe (2) symptoms (Table 1b); Page 10: “WST classes 0 and 1 were combined to increase sample size for subsequent machine learning (ML), resulting in adjusted WST classes representing categories defining absent or mild symptoms (0), moderate (1) or severe ME/CFS symptoms (2), as reflected by orthostatic intolerance”).
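The claim 21 flow (receive a subject's data, extract features, apply a classifier, output the class) can be sketched as below. This is a hypothetical scikit-learn mock-up on synthetic data; the cohort, model settings, and two-class labels are invented stand-ins, not the application's or Lidbury's actual model:

```python
# Hypothetical end-to-end inference for one subject: train a random forest
# on a synthetic "cohort", then classify a single feature vector.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in cohort: rows are participants, columns are immune/pathology markers.
X, y = make_classification(n_samples=150, n_features=6, random_state=1)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

def classify_subject(features):
    """Map one subject's feature vector to a human-readable label."""
    label = clf.predict([features])[0]
    return "ME/CFS" if label == 1 else "healthy"

print(classify_subject(X[0]))  # prints either "healthy" or "ME/CFS"
```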
Claim 22

Lidbury anticipates a system for developing a predictive model for diagnosis of myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) in a human comprising: a processor (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques; Examiner’s Note: Machine learning models are computer implemented and therefore inherently require a processor); and a memory storing computer executable instructions, which when executed by the processor cause the processor to perform operations comprising (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques; Examiner’s Note: Machine learning models are computer implemented and therefore inherently require a memory storing instructions): receiving immune system data for each member of a population comprising healthy humans and humans with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) (Lidbury: Page 2, “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants.
The investigation directly compared the ME/CFS cases to healthy controls, but also examined the application of the weighted standing time (WST), as a measure of symptom severity, to stratify the ME/CFS cohort into mild to severe classes prior to analysis; Page 9: Marker variation and survey results were investigated after the ME/CFS cohort was stratified by WST for symptom severity (classes 1–3) and compared to healthy controls (class 0) (Table 3); Examiner’s Note: markers are obtained from participants and compared to healthy controls); extracting a set of features from the immune system data (Lidbury, Pages 2-3, section 2.2: “After the standing test, non-fasting venous blood samples were collected for routine pathology testing, in addition to a parathyroid hormone (PTH), thyroid function testing (TFT), vitamin D and serum activin B [8]. For participants who were able, 24-h urine samples were collected and the volume, sodium (Na+), potassium (K+) and creatinine 24-h excretion rates were calculated”. Also see Tables 2 and 3; Examiner’s Note: it is noted that the claim does not define what the extracted features represent); wherein the set of features to be extracted is selected based at least in part on a respective importance score of each feature of the set of features (Lidbury: Page 12, section 3.3: “Figure 2 presents the results of two RFA, one with five routine pathology markers, and the other with activin B included in the same pathology model. The pathology markers represent the most effective constellation of blood or urine test results that most successfully predicted WST categories 0, 1 and 2, with an overall predictive accuracy of 62–65%. 
The addition of extra pathology variables either did not improve the accuracy of the model or reduced overall WST class predictive accuracy.”; Page 15: “Similar to the ranking of markers for WST classes 0 versus 1 (Figures 3 and 4), the urinary creatinine excretion rate, ALP and activin B were the top-ranked predictors of all the WST classes (Figure 5), which stratifies ME/CFS severity as according to orthostatic intolerance testing performance. While the cases were correctly predicted, WST class 0 recorded an out-of-bag (OOB) error rate of 8.7%, while class 2 recorded a 17% error rate. However, class 1 (Moderate severity) was perfectly predicted (Figure 5), suggesting again that the marker set including activin B is best for predicting symptom severity ranging from healthy, through mild, to moderate ME/CFS. The extent of the error rate in the severe cases indicates wider variation in these ME/CFS cases. Future studies involving larger participant samples will assist in determining predictive parameters with greater accuracy”; Pages 17-18: “As an extension of RFA, the panel of six predictive markers was assessed by receiver operating characteristic (ROC) curves to investigate the impact of test profile sensitivity and specificity (false negative, false positive rates). Pairwise WST classes were analysed per ROC, both for the entire dataset, and for the correctly predicted cases for each WST class (0, 1, 2). Activin B remained in the top three in terms of predictor importance, with the model producing an AUC of 0.76 for all cases and an AUC of 0.963 for models comprising only correctly predicted outcomes. The correctly predicted cases from each WST class were subsequently used to calculate new reference intervals for each of the six RFA predictors (Figure 2)”; Examiner’s note: the claim does not define what the importance score represents or how it is used in order to extract the features. 
Note that the system is trained using the best predictive markers); training a machine learning algorithm using the set of features to classify a human as healthy or having ME/CFS to obtain a predictive model (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques”; Examiner’s Note: it is inherent that machine learning models must be trained with a training set in order to produce their results). Claim 23 Lidbury anticipates wherein the classifier is a machine-trained classifier, the machine-trained classifier trained, at least in part, from training data comprising immune system data for a population comprising healthy humans and humans with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) (Lidbury, Page 6: “A number of machine learning options are available for the training and testing of data to reveal outcome predictors. To examine the best machine learning option, ensemble analyses that compared random forest analyses (RFA) to support vector machines (SVM), gradient boosting and decision trees, were conducted with the aims of assessing the comparative predictive accuracy of various machine learning techniques”; Examiner’s Note: it is inherent that machine learning models must be trained with a training set in order to produce their results). 
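The workflow the examiner cites from Lidbury (random-forest feature ranking, importance-based selection of a reduced marker panel, retraining, and ROC/AUC evaluation) can be sketched as follows. This is a minimal illustration only: the marker names, the synthetic data, and the use of scikit-learn are assumptions for the sketch, not Lidbury's actual dataset, panel, or tooling.

```python
# Sketch: rank candidate markers by random-forest importance score, keep the
# top-ranked subset, retrain on that subset, and score the classifier by ROC
# AUC. Marker names and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
markers = ["activin_B", "ALP", "urine_creatinine", "CRP", "ESR", "neutrophils"]
X = rng.normal(size=(200, len(markers)))
# Synthetic label (1 = "ME/CFS-like") driven mostly by the first two markers.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Step 1: fit a forest on all markers and read off per-feature importance scores.
rf_full = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
ranked = sorted(zip(markers, rf_full.feature_importances_), key=lambda t: -t[1])

# Step 2: importance-based selection — keep only the top-ranked markers.
top = [markers.index(name) for name, _ in ranked[:3]]

# Step 3: retrain on the reduced panel and evaluate by ROC AUC on held-out data.
rf_top = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top], y_tr)
auc = roc_auc_score(y_te, rf_top.predict_proba(X_te[:, top])[:, 1])
print([name for name, _ in ranked[:3]], round(auc, 3))
```

The same three steps map onto the claim language: the `feature_importances_` ranking plays the role of the "importance score," the top-k cut is the selection of "the set of features to be extracted," and the retrained forest is the "predictive model."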
Claim 24 Lidbury anticipates wherein the immune system data received comprises measurements of immune system biomarkers in a blood sample from a member of the population (Lidbury: Page 2, “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 6 and 12 are rejected under 35 U.S.C. 
103 as being unpatentable over Lidbury as set forth above in view of Cliff et al. (“Cellular immune function in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS)”, referred to as Cliff).

Claim 6 While Lidbury teaches extracting features (Lidbury, Pages 2-3, section 2.2: “After the standing test, non-fasting venous blood samples were collected for routine pathology testing, in addition to a parathyroid hormone (PTH), thyroid function testing (TFT), vitamin D and serum activin B [8]. For participants who were able, 24-h urine samples were collected and the volume, sodium (Na+), potassium (K+) and creatinine 24-h excretion rates were calculated”; also see Tables 2 and 3), Lidbury fails to teach wherein the extracted set of features comprises at least one of the features listed in the table below. Cliff teaches wherein the extracted set of features comprises at least one of the features listed in the table below (Cliff: Pages 4-7, sections “Leucocyte Phenotyping” and “T Cell and NK Cell Function”).

No. Feature
1. %CD3+
2. %CD8+
3. %CD4+
4. CD4:CD8
5. %CD4- CD8-
6. %CD4+ CD45RO+ CCR7+
7. %CD4+ CD45RO- CCR7+
8. %CD4+ CD45RO+ CCR7-
9. %CD4+ CD45RO- CCR7-
10. %CD8+ CD45RO+ CCR7+
11. %CD8+ CD45RO- CCR7+
12. %CD8+ CD45RO+ CCR7-
13. %CD8+ CD45RO- CCR7-
14. %CD45RO+ CD27+ (of DN) (d0)
15. %CD45RO- CD27- (of DN) (d0)
16. %CD45RO+ CD27- (of DN) (d0)
17. %CD45RO+ CD27- (of CD8+ MAIT) (d0)
18. %MAIT (of CD4+) (d0)
19. %MAIT (of CD8+) (d0)
20. %MAIT (of DN) (d0)
21. %MAIT (of CD8+):%MAIT (of DN) (d0)
22. CD4+ total memory %IL-17+ IFN-γ+ (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
23. CD4+ total memory %IL-17+ IFN-γ- (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
24. CD4+ total memory %IL-17+ (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
25. CD4+ total memory %IFN-γ+ (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
26. CD4+ RO+ %IL-17+ IFN-γ+ (of CCR6+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
27. CD4+ RO+ %IL-17+ IFN-γ- (of CCR6+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
28. CD4+ RO+ %IL-17- IFN-γ+ (of CCR6+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
29. CD4+ RO+ %IL-17+ (of CCR6+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
30. CD4+ RO+ %IFN-γ+ (of CCR6+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
31. %IFN-γ+ (of memory CD4+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
32. CD4+ CD45RO+ CCR6+ CD161+ %IL-17+ IFN-γ+ (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
33. CD4+ CD45RO+ CCR6+ CD161+ %IL-17+ IFN-γ- (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
34. CD4+ CD45RO+ CCR6+ CD161+ %IL-17- IFN-γ+ (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
35. CD4+ CD45RO+ CCR6+ CD161- %IL-17+ IFN-γ+ (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
36. CD4+ CD45RO+ CCR6+ CD161- %IL-17+ IFN-γ- (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
37. CD4+ CD45RO+ CCR6+ CD161- %IL-17- IFN-γ+ (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
38. %MAIT (of CD4+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
39. %MAIT (of CD8+) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
40. %MAIT (of DN) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
41. %MAIT (of CD8+):%MAIT (of DN) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
42. %IL-17+ IFN-γ+ (of CD8+ MAIT) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
43. %IFN-γ+ (of CD8+ MAIT) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
44. %IL-17+ (of CD8+ MAIT) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
45. %TNF-α (of CD8+ MAIT) (dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
46. %MAIT (of CD4+) (d0:dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
47. %MAIT (of CD8+) (d0:dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
48. %MAIT (of DN) (d0:dy, where y = 3 to 14, preferably 3 to 10, more preferably 5-7, yet more preferably y = 6)
49. %CCR6+ (of memory CD4+) (d1)
50. CD4+ total memory %IL-17+ (d1)
51. CD4+ RO+ %IL-17+ IFN-γ+ (d1)
52. CD4+ RO+ %IL-17+ IFN-γ- (d1)
53. CD4+ RO+ %IL-17+ (d1)
54. CD4+ RO+ %IFN-γ+ (d1)
55. CD4+ RO+ %IL-17+ IFN-γ+ (of CCR6+) (d1)
56. CD4+ RO+ %IL-17+ IFN-γ- (of CCR6+) (d1)
57. CD4+ RO+ %IL-17+ (of CCR6+) (d1)
58. CD4+ RO+ %IFN-γ+ (of CCR6+) (d1)
59. %IFN-γ+ (of memory CD4+) (d1)
60. %IFN-γ+ (of CD8+ MAIT) (d1)
61. %Granzyme A+ (of CD8+ MAIT) (d1)
62. %Tregs (of naïve CD4+) (d1)
63. %FOXP3+ (of naïve CD4+) (d1)
64. %Tregs (of memory CD4+) (d1)
65. %FOXP3+ (of memory CD4+) (d1)

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of extracting features as taught by Lidbury with the teachings of wherein the extracted set of features comprises at least one of the features listed in the table as taught by Cliff, for the purpose of determining the diagnostic potential of biomarkers (Lidbury, Page 2: “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants”).

Claim 12 Claim 12 recites the same limitations as claim 6 and is rejected on the same basis.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Lidbury as set forth above in view of Castro-Marrero et al. (“Treatment and management of chronic fatigue syndrome/myalgic encephalomyelitis: all roads lead to Rome”, referred to as Castro-Marrero). 
Claim 20 While Lidbury teaches diagnosing ME/CFS, Lidbury does not teach treating a subject classified as having ME/CFS with activity management, a prescription sleep medicine, a pain relieving drug, a pain management method, an antidepressant, an anti-anxiety drug, a stress management method, or a combination thereof. Castro-Marrero teaches treating a subject classified as having ME/CFS with activity management, a prescription sleep medicine, a pain relieving drug, a pain management method, an antidepressant, an anti-anxiety drug, a stress management method, or a combination thereof (Castro-Marrero, pages 347-348, “Pharmacological therapy”; pages 352-359, “Non-pharmacological approaches: counselling, behavioural & rehabilitation interventions”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of diagnosing ME/CFS as taught by Lidbury with the teachings of treating a subject classified as having ME/CFS with activity management, a prescription sleep medicine, a pain relieving drug, a pain management method, an antidepressant, an anti-anxiety drug, a stress management method, or a combination thereof, for the purpose of providing treatment to relieve the symptoms associated with ME/CFS (Castro-Marrero, page 359: “In general, CFS/ME patients who are diagnosed within the first 2 years of the appearance of symptoms respond better to treatment than those diagnosed at a later date. Treatments to relieve symptoms have to be individualized for each patient”). Claim(s) 25 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Lidbury as set forth above in view of Tomas et al. (“Cellular immune function in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS)”, referred to as Tomas). 
Claim 25 While Lidbury teaches determining biomarkers (Lidbury: Page 2, “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants”; Examiner’s Note: Activin B is a protein), Lidbury fails to particularly teach wherein the immune system biomarkers are determined by staining peripheral blood mononuclear cells (PBMCs) for intracellular proteins, cell surface proteins, or a combination thereof and detecting the stained PBMCs. Tomas teaches wherein the immune system biomarkers are determined by staining peripheral blood mononuclear cells (PBMCs) for intracellular proteins, cell surface proteins, or a combination thereof and detecting the stained PBMCs (Tomas, Page 1: “We replicated the MES protocol using neutrophils and peripheral blood mononuclear cells (PBMCs) from CFS/ME patients (10) and healthy controls (13). The protocol was then repeated in PBMCs and neutrophils from healthy controls to investigate the effect of delayed sample processing time used by the Myhill group”; Page 2: “ATP concentration in the presence of excess magnesium. The first experiment investigated the ATP concentration in neutrophils and PBMCs in the presence of excess magnesium in CFS/ME patients and healthy controls (Fig. 1A,B)”). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the determining of biomarkers as taught by Lidbury with the teachings of the immune system biomarkers are determined by staining peripheral blood mononuclear cells (PBMCs) for intracellular proteins, cell surface proteins, or a combination thereof and detecting the stained PBMCs as taught by Tomas for the purpose of determining the diagnostic potential of biomarkers (Lidbury, Page 2: “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants”). Claim 26 Lidbury fails to teach detecting the stained PBMCs is determined by flow cytometry. Tomas teaches wherein detecting the stained PBMCs is determined by flow cytometry (Tomas, page 8: “Flow cytometry. Cells were prepared for flow cytometry in one of two ways. The first was using a Histopaque gradient to isolate a PBMC fraction and a neutrophil fraction as described earlier and the second method used a red blood cell (RBC) lysing solution (Biolegend 420301) using the manufacturers protocol to give the white cell fraction. The white cell pellet was re-suspended in 500 µL PBS along with the PBMC and neutrophils from the Histopaque isolation, on the BD LSR II flow cytometer”). 
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the determining of biomarkers as taught by Lidbury with the teachings of wherein detecting the stained PBMCs is determined by flow cytometry as taught by Tomas, for the purpose of determining the diagnostic potential of biomarkers (Lidbury, Page 2: “The report herein examines the diagnostic potential of serum activin B, both individually and in combination with other blood, serum and urine markers considered for the assessment of research participants”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Omar F Fernandez Rivas whose telephone number is (571)272-2589. The examiner can normally be reached Mon-Fri 5:30-3:00. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Wiley can be reached at (571) 272-4150. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128
Prosecution Timeline

Jun 23, 2022
Application Filed
May 21, 2025
Non-Final Rejection — §102, §103
Sep 03, 2025
Response Filed
Feb 20, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591773
MACHINE LEARNING MODEL VALIDATION WITH VERIFICATION DATA
2y 5m to grant Granted Mar 31, 2026
Patent 12561557
META PSEUDO-LABELS
2y 5m to grant Granted Feb 24, 2026
Patent 12549347
DATA PROTECTION FOR REMOTE ARTIFICIAL INTELLIGENCE MODELS
2y 5m to grant Granted Feb 10, 2026
Patent 12541706
SYSTEM AND METHOD FOR OUT-OF-SAMPLE REPRESENTATION LEARNING
2y 5m to grant Granted Feb 03, 2026
Patent 12505368
Variational Continuous Optimization and Applications
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
68%
With Interview (-0.5%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 274 resolved cases by this examiner. Grant probability derived from career allow rate.
