DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In light of the amendments, the claims are rejected under 35 U.S.C. 101.
In light of the amendments, the previous 35 U.S.C. 103 rejections have been withdrawn.
Notice to Applicant
In the amendment dated 10/15/2025, the following has occurred: claim 21 has been amended; claims 1-20 have been canceled; and claims 22-40 have been added.
Claims 21-40 are pending.
Effective Filing Date: 06/16/2023
Response to Arguments
35 U.S.C. 101 Rejections:
Applicant argues that the claims now reflect the solution to the problem outlined in paragraphs [0105] and [0107] of the specification. Applicant further cites Ex parte Desjardins and the reminder from the Deputy Commissioner for Patents, and states that the claims are now patent eligible. Examiner, however, respectfully disagrees, as Desjardins is directed to a specific, technical improvement of the functioning of an artificial intelligence model itself, with explicit support in the specification. In the instant application, the assertion that the claimed method improves actions based on predictive modeling using protein values is simply an intended result that may occur upon application of the model. MPEP 2106.05(f) recites: “a claim that generically recites an effect of the judicial exception or claims every mode of accomplishing that effect, amounts to a claim that is merely adding the words "apply it" to the judicial exception. See Internet Patents Corporation v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1418 (Fed. Cir. 2015) (The recitation of maintaining the state of data in an online form without restriction on how the state is maintained and with no description of the mechanism for maintaining the state describes "the effect or result dissociated from any method by which maintaining the state is accomplished" and does not provide a meaningful limitation because it merely states that the abstract idea should be applied to achieve a desired result).”
35 U.S.C. 103 Rejections:
Applicant's arguments with respect to the amendments are moot in view of the withdrawal of the 35 U.S.C. 103 rejections.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 21-29 are drawn to a method and claims 30-40 are drawn to a system, each of which is within the four statutory categories (Step 1: YES). Claims 21-40 are further directed to an abstract idea on the grounds set out in detail below. As discussed below, the claims do not include additional elements that are sufficient to amount to significantly more than the abstract idea because the additional computer elements, which are recited at a high level of generality, provide conventional computer functions that do not add meaningful limits to practicing the abstract idea.
Step 2A:
Prong One:
Claim 21 recites a computer-implemented method for predictive modeling using protein values, the method comprising:
1) receiving, by a) one or more processors, medical information for an individual, the medical information comprising measurements of a biomarker and patient data;
2) analyzing, by the a) one or more processors, for a plurality of subjects:
subject characteristics data, and
biological sample-derived data, comprising test values for a plurality of proteins for each of the plurality of subjects, wherein, for each of the plurality of subjects, the test values have been obtained at one or more time points;
3) iteratively training, by the a) one or more processors, a plurality of machine-learning models utilizing the subject characteristics data and the biological sample-derived data to obtain b) a plurality of trained machine-learning models, comprising at least one trained time-to-event machine-learning model or at least one trained event classification machine-learning model, by:
3a) iteratively transforming the subject characteristics data and the biological sample-derived data to generate model-specific training data, by utilizing at least one of at least one down sampling operation, at least one variable pruning operation based on at least one quantitative information density metric, or at least one interpolation operation, to reduce model complexity while preserving predictive accuracy;
3b) iteratively utilizing the model-specific training data to train each corresponding machine-learning model from the plurality of machine-learning models, where each corresponding machine-learning model is configured with a distinct architecture optimized for predicting at least one corresponding future event within at least one corresponding time window based on a corresponding set of test values for a corresponding set of proteins, each obtained at one or more corresponding time points, wherein each machine-learning model is trained on a different combination of proteins from at least two distinct biological process categories;
4) receiving, by the a) one or more processors, for a particular subject, a particular subject characteristics data and a particular biological sample-derived data, comprising a particular set of test values for at least one subset of particular proteins from the plurality of proteins, wherein the particular set of test values has been obtained at one or more particular time points;
5) for each b) trained machine-learning model from the plurality of trained machine-learning models, determining, by the one or more processors, a quantitative concordance score between the particular set of test values for the at least one subset of particular proteins and a set of proteins used to train each corresponding machine-learning model;
6) identifying, by the a) one or more processors, c) a particular machine-learning trained model from the plurality of trained machine-learning models, wherein the c) particular machine-learning trained model is selected based on a highest quantitative concordance score for a specified prediction timeframe, resulting in d) an identified trained machine-learning model that is most optimal for the particular subject to predict an occurrence of at least one particular time-based event within at least one particular future time window based on the particular set of test values for the at least one subset of particular proteins;
7) generating, by the a) one or more processors, at least one particular time-based event prediction for the particular subject by inputting at least a portion of the particular subject characteristics data and the particular biological sample-derived data into the d) identified trained machine-learning model, wherein the at least one particular time-based event prediction comprises a probability score of the at least one particular time-based event occurring within the at least one particular future time window; and
8) outputting, by the a) one or more processors, the at least one particular time-based event prediction; and
9) instructing, by the a) one or more processors, at least one particular event-related action based on the at least one particular time-based event prediction.
Claim 21 recites, in part, performing the steps of 1) receiving medical information for an individual, the medical information comprising measurements of a biomarker and patient data, 2) analyzing for a plurality of subjects: subject characteristics data, and biological sample-derived data, comprising test values for a plurality of proteins for each of the plurality of subjects, wherein, for each of the plurality of subjects, the test values have been obtained at one or more time points, 4) receiving for a particular subject, a particular subject characteristics data and a particular biological sample-derived data, comprising a particular set of test values for at least one subset of particular proteins from the plurality of proteins, wherein the particular set of test values has been obtained at one or more particular time points, 5) for each model from the plurality of models, determining a quantitative concordance score between the particular set of test values for the at least one subset of particular proteins and a set of proteins, 6) identifying a particular model from the plurality of models, wherein the particular model is selected based on a highest quantitative concordance score for a specified prediction timeframe, resulting in an identified model that is most optimal for the particular subject to predict an occurrence of at least one particular time-based event within at least one particular future time window based on the particular set of test values for the at least one subset of particular proteins, 7) generating at least one particular time-based event prediction for the particular subject by inputting at least a portion of the particular subject characteristics data and the particular biological sample-derived data into the identified model, wherein the at least one particular time-based event prediction comprises a probability score of the at least one particular time-based event occurring within the at least one particular future time window, and 
8) outputting the at least one particular time-based event prediction, and 9) instructing at least one particular event-related action based on the at least one particular time-based event prediction. These steps correspond to Certain Methods of Organizing Human Activity, more particularly, managing personal behavior or relationships or interactions between people (including following rules or instructions). For example, the claim describes a process by which one could analyze a patient’s information in order to determine a prediction for the patient.
Claim 21 also recites, in part, performing the steps of 3) iteratively training a plurality of machine-learning models utilizing the subject characteristics data and the biological sample-derived data to obtain b) a plurality of trained machine-learning models, comprising at least one trained time-to-event machine-learning model or at least one trained event classification machine-learning model, by: 3a) iteratively transforming the subject characteristics data and the biological sample-derived data to generate model-specific training data, by utilizing at least one of at least one down sampling operation, at least one variable pruning operation based on at least one quantitative information density metric, or at least one interpolation operation, to reduce model complexity while preserving predictive accuracy, 3b) iteratively utilizing the model-specific training data to train each corresponding machine-learning model from the plurality of machine-learning models, where each corresponding machine-learning model is configured with a distinct architecture optimized for predicting at least one corresponding future event within at least one corresponding time window based on a corresponding set of test values for a corresponding set of proteins, each obtained at one or more corresponding time points, wherein each machine-learning model is trained on a different combination of proteins from at least two distinct biological process categories, 5) for each b) trained machine-learning model from the plurality of trained machine-learning models, determining a quantitative concordance score between the particular set of test values for the at least one subset of particular proteins and a set of proteins used to train each corresponding machine-learning model, and 6) identifying c) a particular machine-learning trained model from the plurality of trained machine-learning models, wherein the c) particular machine-learning trained model is selected based on a highest 
quantitative concordance score for a specified prediction timeframe, resulting in d) an identified trained machine-learning model that is most optimal for the particular subject to predict an occurrence of at least one particular time-based event within at least one particular future time window based on the particular set of test values for the at least one subset of particular proteins. These steps correspond to Mathematical Concepts.
Going forward, the abstract ideas above will be considered as a singular concept for further analysis. Independent claim 30 is substantially similar, and this categorization of the abstract idea for claim 21 applies equally to claim 30.
Dependent claims 22-29 and 31-40 include all of the limitations of claims 21 and 30, and therefore likewise incorporate the above described abstract idea. Dependent claims 24 and 33 add the additional step of “wherein the at least one variable pruning operation comprises removing at least one variable with the at least one quantitative information density metric below at least one predetermined threshold”; claims 25 and 34 add the additional step of “wherein the at least one interpolation operation comprises imputing missing protein values using at least one imputation technique”; claims 27 and 36 add the additional step of “wherein the at least one particular event-related action comprises generating at least one of a health characteristic, a notification, a report, or a recommendation, based on the at least one particular time-based event prediction”; claims 29 and 38 add the additional step of “further comprising periodically retraining at least one of the plurality of machine-learning models as new subject outcome data becomes available”; claim 39 adds the additional step of “configured to display the at least one particular time-based event prediction and a result of the at least one particular event-related action”; and claim 40 adds the additional step of “the system is configured to select different combinations of proteins from at least two distinct biological process categories for training each machine-learning model”. Additionally, the limitations of dependent claims 22-23, 26, 28, 31-32, 35, and 37 further specify elements of the claims from which they depend without adding any additional steps. These additional limitations only further serve to limit the abstract idea. Thus, dependent claims 22-29 and 31-40 are nonetheless directed towards fundamentally the same abstract idea as independent claims 21 and 30 (Step 2A (Prong One): YES).
Prong Two:
This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of using a) one or more processors, b) a plurality of trained machine-learning models, c) a particular machine-learning trained model, d) an identified trained machine-learning model, e) a computer memory storing instructions (from claim 30), and f) a user interface (from claim 39) to perform the claimed steps.
The a) one or more processors, b) plurality of trained machine-learning models, c) particular machine-learning trained model, d) identified trained machine-learning model, e) computer memory storing instructions, and f) user interface in these steps are recited at a high level of generality (i.e., as generic components performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using generic computer components (see Applicant’s specification, which describes nothing beyond what may be generic computing components for these elements of the claim; see also MPEP 2106.05(f)).
Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit that would integrate the abstract idea into a practical application.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea (Step 2A (Prong Two): NO).
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using a) one or more processors, b) a plurality of trained machine-learning models, c) a particular machine-learning trained model, d) an identified trained machine-learning model, e) a computer memory storing instructions, and f) a user interface to perform the claimed steps amount to no more than mere instructions to apply the exception using generic computer components. These elements do not offer “significantly more” than the abstract idea itself because the claims do not recite an improvement to another technology or technical field or an improvement to the functioning of any computer itself, and do not provide meaningful limitations beyond generally linking the abstract idea to a particular technological environment. It should be noted that the claims do not include additional elements that amount to significantly more than the judicial exception because the Specification recites mere generic computer components, as discussed above, that are being used to apply certain mathematical concepts and certain methods of organizing human activity. Specifically, MPEP 2106.05(f) recites that the following limitations are not significantly more:
Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)).
The current invention generates a prediction utilizing a) one or more processors, b) a plurality of trained machine-learning models, c) a particular machine-learning trained model, d) an identified trained machine-learning model, e) a computer memory storing instructions, and f) a user interface; thus, these computing components merely add the words “apply it” with mere instructions to implement the abstract idea on a computer.
Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. The claims are not patent eligible (Step 2B: NO).
Claims 21-40 are therefore rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Art Rejections
No prior art rejections are set forth, as no reference, alone or in combination, could reasonably be applied to reject these claims.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steven G.S. Sanghera whose telephone number is (571)272-6873. The examiner can normally be reached M-F 7:30-5:00 (alternating Fri).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant, can be reached at 571-270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEVEN G.S. SANGHERA/Primary Examiner, Art Unit 3684