DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicant
2. This communication is in response to the communication filed 9/8/2025. Claim 19 is cancelled. Claims 1, 3-4, 6-8, 10-13, 17 and 20 are currently amended. Claims 1-18 and 20 are currently pending.
Claim Rejections - 35 USC § 101
3. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
3.1. Claims 1-18 and 20 are rejected under 35 U.S.C. § 101 because while the claims (1) are to a statutory category (i.e., a process, machine, manufacture or composition of matter), the claims (2A1) recite a judicial exception (i.e., an abstract idea); (2A2) do not recite additional elements that integrate the abstract idea into a practical application; and (2B) are not directed to significantly more than the abstract idea itself.
In regards to (1), the claims are to a statutory category (i.e., a process, machine, manufacture or composition of matter). In particular, independent claims 1, 11 and 20, and their respective dependent claims, are directed, in part, to methods and an apparatus for detecting a health abnormality in a liquid biopsy sample.
In regards to (2A1), the claims, as a whole, recite and are directed to an abstract idea because the claims include one or more limitations that correspond to an abstract idea, including mental processes and/or certain methods of organizing human activity, which encompasses certain activity of a single person, certain activity that involves multiple people, and certain activity between a person and a computer. For example, independent claims 1, 11 and 20, as a whole, are directed to detecting a health abnormality in a liquid biopsy sample by, in part, receiving data, identifying training data sets, compiling training data, identifying relevant features within data sets, and optimizing training data, which are human activities and/or interactions and, therefore, certain methods of organizing human activity. The dependent claims include all of the limitations of their respective independent claims and thus are directed to the same abstract idea identified for the independent claims, but further describe the elements and/or recite field of use limitations.
Furthermore, assuming, arguendo, that the claims are not directed to certain methods of organizing human activity, the claims nevertheless are directed to an abstract idea because the claims, except for certain limitations (* identified below in bold), under the broadest reasonable interpretation, can be reasonably and practically performed in the human mind and/or with pen and paper using observation, evaluation, judgment and/or opinion. That is, other than reciting the certain additional elements, nothing in the claims precludes the limitations from being practically performed in the mind and/or with pen and paper.
CLAIM 1:
A computer implemented method for establishing training data for a machine learning classifier model for use in detecting a health abnormality in a liquid biopsy sample, the computer implemented method comprising:
receiving a plurality of data sets, each data set comprising a plurality of features associated with a respective patient;
identifying all m training data sets of the plurality of data sets associated with a positive detection of the health abnormality;
performing kernel density estimation on the identified m training data sets;
creating first synthetic data sets consisting of p samples drawn at random from a positive health abnormality kernel density model;
identifying all n training data sets of the plurality of data sets associated with an absence of the health abnormality;
performing kernel density estimation on the identified n training data sets;
creating second synthetic data sets consisting of q samples drawn at random from an absent health abnormality kernel density model;
compiling training data comprising the first and second synthetic data sets;
identifying relevant features within respective first and second synthetic data sets of the training data, wherein a relevant feature or combination of such features provides a level of likelihood above a threshold of a positive indication of the health abnormality; and
optimizing the training data via a removal of nonrelevant features.
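For reference, the kernel density estimation and synthetic-sampling steps recited in claim 1 (fitting a kernel density model to the m positive and n negative training data sets, then drawing p and q synthetic samples at random from each model) can be sketched as follows. This is an illustration only, not the applicant's implementation; the feature values, bandwidth, and sample counts are hypothetical, and a Gaussian kernel is assumed.

```python
import numpy as np

def kde_sample(data, n_samples, bandwidth=0.5, rng=None):
    """Draw samples from a Gaussian kernel density model fit to `data`.

    Sampling from a Gaussian KDE reduces to choosing a training point
    uniformly at random and perturbing it with N(0, bandwidth^2) noise.
    """
    rng = np.random.default_rng(rng)
    data = np.asarray(data, dtype=float)
    idx = rng.integers(0, len(data), size=n_samples)          # random kernel centers
    noise = rng.normal(0.0, bandwidth, size=(n_samples, data.shape[1]))
    return data[idx] + noise

# Hypothetical feature matrices: m = 3 positive patients, n = 2 negative patients.
positive = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]])
negative = np.array([[5.0, 6.0], [5.1, 5.9]])

p, q = 10, 10
synthetic_pos = kde_sample(positive, p, rng=0)    # first synthetic data sets
synthetic_neg = kde_sample(negative, q, rng=1)    # second synthetic data sets
training_data = np.vstack([synthetic_pos, synthetic_neg])   # compiled training data
labels = np.array([1] * p + [0] * q)
```

The sketch mirrors the claim's structure: two separate density models, one per class, each sampled independently before the synthetic sets are compiled into a single training set.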
CLAIM 2:
The computer implemented method of claim 1, wherein the liquid biopsy comprises a sample of blood, urine, fecal matter, breath, or sputum.
CLAIM 3:
The computer implemented method of claim 1, wherein the received data set comprises any one or more of DNA, epidemiology based data, proteomics, epigenetics, volatile organic molecules, metabolomics or microbiome based data.
CLAIM 4:
The computer implemented method of claim 1, wherein the relevant features comprise biological features in a form of one or more free molecules, exosomes, apoptotic bodies found in the liquid biopsy or cells found in the liquid biopsy.
CLAIM 5:
The computer implemented method of claim 1, further comprising: reformatting and normalizing the received data set.
CLAIM 6:
The computer implemented method of claim 1, wherein identifying relevant features further comprises performing a linear dimensionality reduction or non-linear dimensionality reduction techniques.
CLAIM 7:
The computer implemented method of claim 1, wherein identifying relevant features further comprises identifying relevant combinations of features via inputting the compiled training data into a variety of classifier types to identify non-linear feature interactions.
CLAIM 8:
The computer implemented method of claim 1, wherein identifying relevant features further comprises:
compiling randomized subsets of features, wherein the subsets comprise x features;
inputting the randomized subsets into a plurality of different classifier models;
selecting:
feature subsets which enable a classifier of the plurality of different classifier models to yield a minimum predetermined metric in a validation, or
a proportion of top-performing classifier feature subsets; and
for different z possible combinations of features which occur in at least a specified proportion of selected feature subsets;
assigning a level of importance to each of the different z possible combinations based on an average decrease in classifier performance across all feature subsets comprising a particular feature combination,
wherein features of the particular feature combination are rendered non-informative by:
deleting relevant feature subsets of all features of the particular feature combination, or
permuting values of the features in relation to a response variable.
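The permutation step recited in claim 8, in which a feature is rendered non-informative by permuting its values in relation to the response variable and importance is measured by the resulting decrease in classifier performance, can be illustrated with a minimal sketch. The nearest-centroid classifier and the data below are hypothetical stand-ins, not the claimed "plurality of different classifier models."

```python
import numpy as np

def centroid_accuracy(X, y):
    """Accuracy of a minimal nearest-centroid classifier (a toy stand-in
    for one of the claim's classifier models)."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def permutation_importance(X, y, feature, rng=None):
    """Importance of one feature = drop in accuracy after permuting its
    values, which renders the feature non-informative w.r.t. the response."""
    rng = np.random.default_rng(rng)
    base = centroid_accuracy(X, y)
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return base - centroid_accuracy(Xp, y)

rng = np.random.default_rng(0)
# Hypothetical data: feature 0 separates the classes; feature 1 is pure noise.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal([4, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
imp_informative = permutation_importance(X, y, feature=0, rng=1)
imp_noise = permutation_importance(X, y, feature=1, rng=1)
```

Permuting the informative feature destroys the class separation and causes a large accuracy drop, while permuting the noise feature leaves performance essentially unchanged; averaging such drops across feature subsets yields the level of importance the claim assigns to each combination.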
CLAIM 9:
The computer implemented method of claim 8, wherein the plurality of different classifier models comprise a learning classifier system.
CLAIM 10:
The computer implemented method of claim 1, wherein identifying relevant features further comprises:
inputting the compiled training data into a plurality of different learning classifier systems; and
assigning a level of importance to each feature combination which occurs in at least a specified proportion of selected feature subsets based on an average decrease in classifier performance across all feature subsets containing a particular feature combination,
wherein the features of the particular feature combination are rendered non-informative by:
deleting relevant feature subsets of all features of the particular feature combination, or
permuting values of the features in relation to a response variable.
CLAIM 11:
A computer implemented method for selecting a machine learning classifier model for use in detecting a health abnormality in a liquid biopsy sample, the computer implemented method comprising:
receiving a plurality of data sets, each data set comprising a plurality of features associated with a respective patient;
identifying all m training data sets of the plurality of data sets associated with a positive detection of the health abnormality, and
performing kernel density estimation on the identified m training data sets;
creating first synthetic data sets consisting of p samples drawn at random from a positive health abnormality kernel density model;
identifying all n training data sets of the plurality of data sets associated with an absence of the health abnormality;
performing kernel density estimation on the identified n training data sets;
creating second synthetic data sets consisting of q samples drawn at random from an absent health abnormality kernel density model;
compiling training data comprising the first and second synthetic data sets;
identifying relevant features within respective first and second synthetic data sets of the training data, wherein a relevant feature or combination of such features provides a level of likelihood above a threshold of a positive indication of the health abnormality;
optimizing the training data via a removal of nonrelevant features; and
training a plurality of different machine learning classifier models using the optimized training data.
CLAIM 12:
The computer implemented method of claim 11, further comprising:
compiling a validation data set, wherein the validation data set comprises the identified relevant features, and wherein the validation data set is not equivalent to any data set comprised in the optimized training data set;
assessing a performance of the trained plurality of different machine learning classifier models on the validation data set; and
selecting a machine learning classifier, from the plurality of different machine learning classifier models, wherein the selected machine learning classifier yields a percentage above a threshold of correctly detected health abnormalities.
CLAIM 13:
The computer implemented method of claim 11, further comprising:
compiling k-folds for k-fold validation;
assessing an average performance on validation folds in k-fold cross-validation; and
selecting a machine learning classifier, from the plurality of different machine learning classifier models, wherein the selected machine learning classifier yields a percentage above a threshold of correctly detected health abnormalities.
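The k-fold procedure recited in claim 13 (compiling k folds, assessing average performance on the validation folds, and selecting the best-performing classifier) can be sketched as follows. The fold count, data, and nearest-centroid model are hypothetical; any of the candidate classifier models could be substituted for `fit_predict_centroid`.

```python
import numpy as np

def kfold_scores(X, y, k, fit_predict, rng=None):
    """Compile k folds and return validation accuracy on each held-out fold."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = fit_predict(X[train], y[train], X[val])
        scores.append((pred == y[val]).mean())
    return scores

def fit_predict_centroid(Xtr, ytr, Xval):
    """Toy nearest-centroid model standing in for one candidate classifier."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    return (np.linalg.norm(Xval - c1, axis=1) < np.linalg.norm(Xval - c0, axis=1)).astype(int)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal([4, 4], 1, (60, 2))])
y = np.array([0] * 60 + [1] * 60)
scores = kfold_scores(X, y, k=5, fit_predict=fit_predict_centroid, rng=0)
avg = float(np.mean(scores))   # average performance across the k validation folds
```

Repeating this for each candidate model and selecting the one whose average validation score exceeds the threshold corresponds to the selection step of the claim.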
CLAIM 14:
The computer implemented method of claim 12, further comprising:
assessing a performance of the selected machine learning classifier on the validation data set via a receiver operating characteristic curve; and
optimizing parameters of the selected machine learning classifier to obtain predetermined sensitivity and selectivity ratios on the receiver operating characteristic curve.
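The receiver operating characteristic analysis recited in claim 14 can be illustrated by sweeping a decision threshold over classifier scores and selecting an operating point that meets a predetermined sensitivity target. The scores and labels below are hypothetical, and the 0.9 sensitivity target is an assumed example, not a value from the record.

```python
import numpy as np

def roc_points(scores, labels):
    """Sweep a decision threshold over classifier scores and return
    (false positive rate, true positive rate) pairs, i.e., the ROC curve."""
    thresholds = np.sort(np.unique(scores))[::-1]
    P, N = (labels == 1).sum(), (labels == 0).sum()
    pts = []
    for t in thresholds:
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / P   # sensitivity
        fpr = (pred & (labels == 0)).sum() / N   # 1 - specificity
        pts.append((fpr, tpr))
    return pts

# Hypothetical classifier scores: positives tend to score higher than negatives.
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.2])
curve = roc_points(scores, labels)

# Choose the operating point with the lowest false positive rate that still
# meets the predetermined sensitivity target (assumed here to be 0.9).
best_fpr = min((fpr for fpr, tpr in curve if tpr >= 0.9), default=None)
```

Tuning classifier parameters then amounts to reshaping this curve until the desired sensitivity/specificity trade-off is attainable at some threshold.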
CLAIM 15:
The computer implemented method of claim 11, comprising:
receiving a test data set comprising the identified relevant features, wherein the test data set is not equivalent to any data set comprised in the optimized training data set, and wherein the test data set comprises data corresponding to at least one liquid biopsy sample;
assessing a performance of the selected machine learning classifier on the test data set; and
receiving an output of the selected machine learning classifier, wherein the output indicates a presence of the health abnormality in the liquid biopsy sample corresponding to the test data set.
CLAIM 16:
The computer implemented method of claim 15, wherein the output is a probability or vote corresponding to a presence versus an absence of the health abnormality.
CLAIM 17:
The computer implemented method of claim 1, wherein the classifier model is one or more of a support vector machine, neural network, decision tree, random forest, boosted tree, logistic regression, lasso, k-nearest neighbor, or naive Bayes.
CLAIM 18:
The computer implemented method of claim 1, wherein the classifier model is a Michigan-style supervised learning classifier system or a Pittsburgh-style supervised learning system.
CLAIM 20:
An analyzing unit comprising:
an input/output unit;
a memory; and
processing circuitry,
wherein the analyzing unit is configured to perform operations comprising:
receiving a plurality of data sets, each data set comprising a plurality of features associated with a respective patient;
identifying all m training data sets of the plurality of data sets associated with a positive detection of a health abnormality;
performing kernel density estimation on the identified m training data sets;
creating first synthetic data sets consisting of p samples drawn at random from a positive health abnormality kernel density model;
identifying all n training data sets of the plurality of data sets associated with an absence of the health abnormality;
performing kernel density estimation on the identified n training data sets;
creating second synthetic datasets consisting of q samples drawn at random from an absent health abnormality kernel density model;
compiling training data comprising the first and second synthetic data sets;
identifying relevant features within respective first and second synthetic data sets of the training data, wherein a relevant feature or combination of such features provides a level of likelihood above a threshold of a positive indication of the health abnormality; and
optimizing the training data via a removal of nonrelevant features.
* The limitations that are in bold are considered “additional elements” that are further analyzed below in subsequent steps of the 101 analysis. The limitations that are not in bold are abstract and/or can be reasonably and practically performed in the human mind and/or with pen and paper.
In regards to (2A2), the claims do not recite additional elements that integrate the abstract idea into a practical application. The additional elements in the claims (i.e., * identified above in bold) do not integrate the abstract idea into a practical application because the additional elements merely add insignificant extra-solution activity to the abstract idea; merely link the use of the judicial exception to a particular technological environment or field of use; and/or simply append technologies and functions, specified at a high level of generality, to the abstract idea (i.e., the additional elements amount to no more than a recitation of the words “apply it” (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer).
Here, the additional elements (e.g., computer, machine learning classifier model, a positive health abnormality kernel density model, an absent health abnormality kernel density model, classifier, classifier model, a learning classifier system, analyzing unit, input/output unit, memory, processing circuitry, etc.) are recited at a high level of generality such that they amount to no more than mere instructions to apply the abstract idea using generic computer technologies. Moreover, the claims recite “configured to perform”, etc., devoid of any meaningful technological improvement details and thus further evidence that the additional elements are merely being used to leverage generic technologies to automate what otherwise could be done manually. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Furthermore, the additional elements do not recite improvements to the functioning of a computer, or to any other technology or technical field—the additional elements merely recite general purpose computer technology; the additional elements do not recite applying or using a judicial exception to effect a particular treatment or prophylaxis for disease or medical condition—there is no actual administration of a particular treatment; the additional elements do not recite applying the judicial exception with, or by use of, a particular machine—the additional elements merely recite general purpose computer technology; the additional elements do not recite limitations effecting a transformation or reduction of a particular article to a different state or thing—the additional elements do not recite transformation such as a rubber mold process; the additional elements do not recite applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment—the additional elements merely leverage general purpose computer technology to link the abstract idea to a technological environment.
In regards to (2B), the claims, individually, as a whole and in combination with one another, do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements or combination of elements in the claims, other than the abstract idea per se, amount to no more than a recitation of (A) a generic computer structure(s) that serves to perform computer functions that serve to merely link the abstract idea to a particular technological environment (i.e., computers); and/or (B) functions that are well-understood, routine, and conventional activities previously known to the pertinent industry.
Here, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer technologies. Mere instructions to apply an exception using generic computer technologies cannot provide an inventive concept.
Moreover, paragraphs [0043]-[0044] of applicant's specification (US 2024/0387047) recite that the system/method is implemented using processing circuitry, which may be any suitable type of computation unit, for example, a microprocessor, digital signal processor (DSP), field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or any other form of circuitry, which are well-known general purpose or generic-type computers and/or technologies. The use of generic computer components recited at a high level of generality to process information through an unspecified processor/computer does not impose any meaningful limit on the computer implementation of the abstract idea. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Furthermore, the additional elements are merely well-known general purpose computers, components and/or technologies that receive, transmit, store, display, generate and otherwise process information which are akin to functions that courts consider well-understood, routine, and conventional activities previously known to the pertinent industry, such as, performing repetitive calculations; receiving or transmitting data over a network; electronic recordkeeping; retrieving and storing information in memory; and sorting information (See, for example, MPEP § 2106).
Therefore, the claims are not patent-eligible under 35 U.S.C. § 101.
Response to Arguments
4. Applicant's arguments filed 9/8/2025 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed hereinbelow in the order in which they appear in the response filed 9/8/2025.
4.1. Applicant argues, on pages 11-24 of the response, that the claims are patent-eligible in view of the guidance set forth in: the Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101 (Aug. 4, 2025) (“August 4 Memo”), the 2024 Guidance Update on Patent Subject Matter Eligibility, Including on Artificial Intelligence, 89 Fed. Reg. 58128-58138 (Jul. 17, 2024) (“2024 PEG”), and M.P.E.P. § 2106. In particular, Applicant argues that (A) the claims are not directed to an abstract idea because the claims do not recite a judicial exception under Step 2A, Prong One; (B) the claims integrate a practical application under Step 2A, Prong Two; and (C) under Step 2B, the claims recite “Significantly More”.
In regard to (A), it is noted that the claims are directed to “detecting a health abnormality in a liquid biopsy sample.” It is submitted that medical screening and diagnostic testing are procedures performed by healthcare practitioners, medical scientists, and the like, and are therefore human activities. As such, it is further submitted that these human activities properly fall under the category of certain methods of organizing human activity, which encompasses certain activity of a single person, certain activity that involves multiple people, and certain activity between a person and a computer. Assuming, arguendo, that the claims cannot be properly categorized under certain methods of organizing human activity, the claims nonetheless are abstract because the claims can be reasonably and practically performed in the human mind and/or with pen and paper. For example, nothing precludes a human from receiving data sets, identifying data sets associated with positive detection of a health abnormality, performing probability estimations, creating data sets, identifying training data sets, performing probability estimations on the training data, creating second data sets, compiling training data, identifying relevant features within the data sets, optimizing the training data by removing nonrelevant features, etc.
In regard to Example 39 of the USPTO’s Subject Matter Eligibility Examples: Abstract Ideas, it is submitted that applicant’s claims are not analogous to the claims of Example 39. For example, Example 39’s claims are directed to a method of training a neural network for facial detection by collecting digital images, applying transformations to each digital facial image, creating a training set of digital facial images, training a neural network using the created training set, creating a second training set including facial images incorrectly detected after training with the first training set, and training the neural network using the second training set, which thereby technologically improves the neural network’s facial detection ability. In contrast, applicant’s claims do not recite, inter alia, transforming digital data/images or technologically improving a neural network; applicant’s claims are directed merely to optimizing training data. As such, it is submitted that applicant’s claims are more similar to patent-ineligible claim 2 of Example 47, which is directed to a method of using an artificial neural network (ANN). More particularly, claim 2 recites receiving training data, discretizing the training data, training the ANN with the data, detecting anomalies in the data using the ANN, and analyzing the detected anomalies to merely generate and output anomaly data. Similarly, applicant’s claims also merely generate and output data and thus are likewise patent-ineligible.
In regard to (B) and (C), it is reiterated that the claims do not recite additional elements that integrate the abstract idea into a practical application or amount to significantly more than the abstract idea because the additional elements are specified at a high level of generality and amount to no more than a recitation of the words “apply it” (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer. Mere instructions to apply an exception using generic computer technologies cannot provide an inventive concept. Moreover, the claims are not directed to a technological improvement, as discussed in section 3.1., supra. In other words, the focus of applicant’s claims is not on an improvement in computers as tools, but on certain abstract ideas that use computers as tools.
As such, it is respectfully submitted that the claims are directed to an abstract idea, the abstract idea is not integrated into a practical application, and the additional elements do not amount to significantly more than the abstract idea itself; and therefore, the claims are not patent-eligible subject matter under 35 U.S.C. § 101.
Conclusion
5. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Tomaszewski whose telephone number is (313)446-4863. The examiner can normally be reached M-F 5:30 am - 2:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter H Choi can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL TOMASZEWSKI/Primary Examiner, Art Unit 3681