Prosecution Insights
Last updated: April 19, 2026
Application No. 18/558,925

DIGITAL MEASUREMENT STACKS FOR CHARACTERIZING DISEASES, MEASURING INTERVENTIONS, OR DETERMINING OUTCOMES

Final Rejection: §101, §103
Filed: Nov 03, 2023
Examiner: KANAAN, LIZA TONY
Art Unit: 3683
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Janssen Pharmaceutica NV
OA Round: 2 (Final)
Grant Probability: 23% (At Risk)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 7m
Grant Probability With Interview: 58%

Examiner Intelligence

Career Allow Rate: 23% (26 granted / 115 resolved; -29.4% vs Tech Center average). This examiner grants only 23% of cases.
Interview Lift: +35.3% higher allowance rate in resolved cases with an interview, a strong lift.
Typical Timeline: 3y 7m average prosecution; 51 applications currently pending.
Career History: 166 total applications across all art units.
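The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using only the numbers reported on this page (the helper names are illustrative and not part of any real analytics API):

```python
# Recompute the examiner-level headline metrics shown above.
# Inputs are the figures reported on this page; helper names are
# illustrative only, not from any real analytics product.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point allowance lift for cases with an interview."""
    return rate_with - rate_without

career = allow_rate(26, 115)        # 26 granted of 115 resolved
print(round(career, 1))             # -> 22.6, displayed as 23%

# The 58% with-interview estimate against the ~23% baseline
# reproduces the roughly +35-point lift reported above.
print(round(interview_lift(58.0, career), 1))
```

The page's +35.3% figure presumably reflects unrounded inputs; the sketch reproduces it to within rounding.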

Statute-Specific Performance

§101: 39.7% (-0.3% vs TC avg)
§103: 33.0% (-7.0% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)

Deltas are relative to estimated Tech Center averages. Based on career data from 115 resolved cases.
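Each delta above is a straight subtraction from an estimated Tech Center average. A minimal sketch back-deriving those averages from the reported figures (the derivation and dictionary keys are illustrative only; the averages are estimates, not published USPTO data):

```python
# Statute-specific overcome rates and their reported deltas, taken
# from the table above. Implied TC averages are back-derived as
# (rate - delta), purely for illustration.

rates  = {"101": 39.7, "103": 33.0, "102": 9.4, "112": 15.0}
deltas = {"101": -0.3, "103": -7.0, "102": -30.6, "112": -25.0}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)   # each statute implies a ~40.0% TC average

# The steepest shortfall versus the Tech Center is under §102.
worst = min(deltas, key=deltas.get)
print(worst)            # -> 102
```

Notably, all four statutes back out to the same ~40% Tech Center estimate, so the deltas appear to share a single baseline.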

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Arguments

In the amendment dated 10/16/2025, the following occurred: Claims 1 and 3-17 were amended. Claim 2 was canceled. Claims 1 and 3-18 are currently pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 and 3-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1 and 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a system and a method for digital measurement stacks for characterizing diseases, measuring interventions, or determining outcomes.
Regarding claims 1 and 18, the limitations of claim 1 are: obtaining a measurement of interest from the subject, wherein the measurement of interest has been captured according to a measurement method; selecting a digital measurement solution from a plurality of digital measurement solutions, wherein the plurality of digital measurement solutions are of a common class that is represented by a target solution profile, wherein (i) the instrumentation asset of the selected digital measurement solution specifies the measurement method that has been used to capture the measurement of interest, and (ii) the measurement method specifies a particular device for capturing raw data, and wherein the target solution profile represents a generalization of the plurality of digital measurement solutions; and applying the selected digital measurement solution to the obtained measurement of interest to characterize the disease for the subject, wherein the digital measurement solution comprises a measurement definition defining one or more concepts of interest relevant to the disease and an evidence asset for performing one or more validations on a dataset generated by the instrumentation asset, and wherein applying the selected digital measurement solution comprises applying the instrumentation asset to the measurement of interest to transform the raw data into a dataset that is informative for characterizing the disease. Regarding claim 18, the limitations are: generating a measurement definition of a target solution profile, the measurement definition defining one or more concepts of interest relevant to the disease; generating or selecting an instrumentation asset for the target solution profile, configured to transform data captured according to the measurement definition to a dataset, and thereby interchangeable across different target solution profiles; generating an evidence asset of the target solution profile for performing one or more validations on the dataset; and generating a digital measurement solution by at least specifying the target solution profile, wherein the digital measurement solution is of a common class that is represented by the target solution profile, wherein the target solution profile is unchanged over time and thereby enables efficient life-cycle management of the plurality of digital measurement solutions.

As drafted, these are processes that, under the broadest reasonable interpretation, cover certain methods of organizing human activity (i.e., managing personal behavior, including following rules or instructions) but for the recitation of generic computer components. That is, other than reciting one or more processors in claim 1 and a method in claim 18, the claimed invention amounts to managing personal behavior or interactions between people (i.e., following rules or instructions). For example, but for the one or more processors, the claims encompass digital measurement stacks for characterizing diseases, measuring interventions, or determining outcomes in the manner described in the identified abstract idea, supra. The Examiner notes that certain "method[s] of organizing human activity" include a person's interaction with a computer (see MPEP 2106.04(a)(2)(II)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application. Claim 18 is not tied to any particular technological environment that implements the identified abstract idea. Claim 1 recites the additional element of one or more processors. The one or more processors are not described by the Applicant and are recited at a high level of generality (i.e., a generic computer processor for performing generic computer functions; see Spec. paras. 00171-00174 and 00179), such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Claims 1 and 18 further recite the additional elements of an instrumentation asset and a particular device. These additional elements are recited at a high level of generality (i.e., a general means to output/receive/transmit/measure data) and amount to extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, claim 18 is not tied to any particular technological environment that implements the identified abstract idea. Moreover, the additional element of using one or more processors to perform the noted steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more"). Also as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of an instrumentation asset and a particular device were considered extra-solution activity. These have been re-evaluated under the "significantly more" analysis and determined to be well-understood, routine, and conventional activity in the field. MPEP 2106.05(d)(II) indicates that receiving and/or transmitting data over a network has been held by the courts to be well-understood, routine, and conventional activity (citing Symantec, TLI Communications, OIP Techs., and buySAFE).
Well-understood, routine, and conventional activity cannot provide an inventive concept ("significantly more"). Therefore, when considering the additional elements alone and in combination, there is no inventive concept in the claims, and thus the claims are not patent eligible.

Claims 3-17 are similarly rejected because they either further define/narrow the abstract idea and/or do not further limit the claims to a practical application or provide an inventive concept such that the claims would be subject matter eligible, even when considered individually or as an ordered combination.

Claim 3 further merely describes performing the one or more validations.
Claim 4 further merely describes the technical validation.
Claim 5 further merely describes the analytical validation.
Claim 6 further merely describes performing the clinical validation.
Claim 7 further merely describes the digital measurement solution.
Claim 8 further merely describes a qualification protocol.
Claims 9 and 10 further merely describe validating the dataset.
Claim 11 further merely describes the metadata.
Claims 12 and 13 further merely describe the specification of the digital measurement solution. Claim 13 also includes the additional element of "a newly released device," which is interpreted as the device above and does not provide a practical application or significantly more.
Claim 14 further merely describes the upgraded capability.
Claim 15 further merely describes the common class of the plurality of digital measurement solutions.
Claim 16 further merely describes the common method of measuring activity.
Claim 17 further merely describes the instrumentation asset. Claim 17 also includes the additional element of a "machine learning algorithm," which is interpreted as merely applying ("apply it") the abstract idea.
MPEP 2106.04(d)(I) indicates that merely saying "apply it," or the equivalent, to the abstract idea cannot provide a practical application, and MPEP 2106.05(I)(A) indicates that merely saying "apply it," or the equivalent, cannot provide an inventive concept ("significantly more"). As such, the claims are not patent eligible. As can be seen, claims 3-17 further define the abstract idea and are rejected for the same reasons presented above with respect to claims 1 and 18.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 3-18 are rejected under 35 U.S.C. 103 as being unpatentable over De Vries (WO 2021/222802 A1) in view of Monaghan (WO 2021/207543 A1).

REGARDING CLAIM 1

De Vries discloses a system for characterizing a disease of a subject through measurement stacks of multiple digital components including a digital measurement solution, the system comprising one or more processors configured to execute instructions causing the one or more processors to perform steps comprising ([008] teaches the use of at least one data processor to perform the operations described):

obtaining a measurement of interest from the subject, wherein the measurement of interest has been captured according to a measurement method ([0020] teaches a wearable device configured to measure health parameters and/or biomarkers of the patient (interpreted by the examiner as a measurement of interest from the subject) and to transmit data characterizing the measured health parameters and/or biomarkers to the platform (interpreted by the examiner as a means to obtain the measurement according to a measurement method));

that comprises an instrumentation asset that is device technology agnostic, wherein (i) the instrumentation asset of the selected digital measurement solution specifies the measurement method that has been used to capture the measurement of interest, and (ii) the measurement method specifies a particular device for capturing raw data (De Vries at [0020] teaches a wearable device configured to measure health parameters and/or biomarkers of the patient and to transmit data characterizing the measured health parameters and/or biomarkers to the platform, and a portable medical device configured to measure the health parameters and/or the biomarkers of the patient (interpreted by the examiner as an instrumentation asset that is device technology agnostic, wherein (i) the instrumentation asset of the selected digital measurement solution specifies the measurement method that has been used to capture the measurement of interest, and (ii) the measurement method specifies a particular device for capturing raw data));

wherein the digital measurement solution comprises a measurement definition defining one or more concepts of interest relevant to the disease ([0030] also teaches characterizing a disease profile of the patient, and [0060] teaches that the machine learning models (interpreted by the examiner as the machine learning of Monaghan, which is interpreted as the digital measurement solution) identify other data predictors of treatment success (in this case defined as diabetes control via measurement of hemoglobin A1c), such as another co-morbid condition or specific demographics such as age range; this co-morbid condition and age range can be added to the logic of the recommendation rules engine to incorporate parameters beyond those considered in treatment guidelines (interpreted by the examiner as a measurement definition defining one or more concepts of interest relevant to the disease). [0066] also teaches that the analysis planner can include an analysis plan loader that includes a list of analyses to be executed, an execution schedule, a cohort definition, and parameters for each analysis.); and

wherein applying the selected digital measurement solution comprises applying the instrumentation asset to the measurement of interest to transform raw data into a dataset that is informative for characterizing the disease ([006] teaches providing a treatment recommendation, and [0028] teaches a health outcome evaluation that can include metrics that can be determined based on the received healthcare information data.
The determined metrics can include a disease profile (e.g., diagnoses, duration, utilization by disease) (interpreted by the examiner as a means to transform raw data into a dataset that is informative for characterizing the disease)).

De Vries does not explicitly disclose, however Monaghan discloses: selecting a digital measurement solution from a plurality of digital measurement solutions, wherein the plurality of digital measurement solutions are of a common class that is represented by a target solution profile, and wherein the target solution profile represents a generalization of the plurality of digital measurement solutions; and applying the selected digital measurement solution to the obtained measurement of interest to characterize the disease for the subject (Monaghan at [0094] teaches that the prediction system may select a particular disease prediction ML model from the plurality of trained disease prediction ML models to use for the patient (interpreted by the examiner as selecting a digital measurement solution from a plurality of digital measurement solutions). For instance, a particular medical facility may be associated with a first disease prediction ML model, and the medical facility may provide treatment data for a particular patient to the prediction system.
In such instances, the prediction system may use the first disease prediction ML model to determine whether the particular patient is positive or negative for a disease such as COVID (interpreted by the examiner as applying the selected digital measurement solution to the obtained measurement of interest to characterize the disease for the subject)), and an evidence asset for performing one or more validations on a dataset generated by the instrumentation asset ([0084] teaches validating datasets).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the noted features of De Vries with the teachings of Monaghan, since known work in one field of endeavor may prompt variations in design in either the same field or a different field based on design incentives or other market forces, if the variations would have been predictable to one of ordinary skill in the art (KSR rationale F). One of ordinary skill in the art of healthcare data processing would have found it obvious to update the treatment recommendation of the primary reference using the system of assessment, as found in the secondary reference, in order to gain the commonly understood benefits of such adaptation, such as decreased size, increased reliability, simplified operation, and reduced cost. This update would be accomplished with no unpredictable results.

REGARDING CLAIM 3

De Vries and Monaghan disclose the limitations of claim 1. De Vries does not explicitly disclose, however Monaghan further discloses: the system of claim 1, wherein performing the one or more validations comprises performing one or more of a technical validation, an analytical validation, or a clinical validation (Monaghan at [0084] teaches validation datasets (interpreted by the examiner as an analytical validation)).

REGARDING CLAIM 4

De Vries and Monaghan disclose the limitations of claim 1.
Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 3, wherein performing the technical validation comprises comparing the dataset generated by the instrumentation asset to specifications of one or more devices used to capture the measurement of interest (De Vries at [009] teaches comparing the received healthcare information data to healthcare data characterizing a predetermined set of healthcare parameters for an aggregated population of patients, and [0036] teaches comparing the received data (interpreted by the examiner as the dataset generated by the instrumentation asset) to aggregated healthcare data that characterizes a predetermined set of healthcare parameters (interpreted by the examiner as the specifications of one or more devices used to capture the measurement of interest)).

REGARDING CLAIM 5

De Vries and Monaghan disclose the limitations of claim 1. De Vries does not explicitly disclose, however Monaghan further discloses: the system of claim 3, wherein performing the analytical validation comprises: determining any of reliability, specificity, or sensitivity metrics for the dataset; and comparing the reliability, specificity, or sensitivity metrics to a threshold value (Monaghan at [0084] teaches a validation dataset and that performance is measured by looking at several metrics. [0058] teaches that the prediction system may determine the disease prediction ML model is trained based on the accuracy of the trained disease prediction ML model using a certain threshold. [0102] teaches that AUROC may measure the rate of true and false positives classified by the prediction model across probability thresholds.
Recall (sensitivity) may measure the rate of true positives classified by the model at a specified threshold (interpreted by the examiner as wherein performing the analytical validation comprises determining any of reliability, specificity, or sensitivity metrics for the dataset and comparing the reliability, specificity, or sensitivity metrics to a threshold value)).

REGARDING CLAIM 6

De Vries and Monaghan disclose the limitations of claim 1. Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 3, wherein performing the clinical validation comprises assessing treatment effects on measurements of interest for the disease (De Vries at [0053] teaches that the treatment recommendation can be determined based on the answers to the questions of the questionnaire data described above. For example, if the answers from the questionnaire indicate that a reason for patient nonadherence to a prescribed treatment plan is that the prescribed treatment causes undesirable side effects, the one or more treatment recommendation algorithms will determine a recommendation for an alternative treatment that does not cause these side effects, based on an assessment of received data characterizing expert clinical knowledge, existing clinical guidelines, and other third-party data sources (interpreted by the examiner as assessing treatment effects on measurements of interest for the disease)).

REGARDING CLAIM 7

De Vries and Monaghan disclose the limitations of claim 1.
Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 1, wherein the digital measurement solution is previously validated by implementing one or more qualification protocols used to establish comparability of solutions across the digital measurement solutions of the target solution profile (De Vries at [0064] teaches that the software platform can then determine an optimal timing (e.g., Monday mornings), format (e.g., fax to office), and frequency (e.g., once a week) for delivering the treatment recommendations, in order to maximize the chance of the recommendations being implemented (interpreted by the examiner as implementing one or more qualification protocols)).

REGARDING CLAIM 8

Claim 8 is analogous to claim 1; thus, claim 8 is similarly analyzed and rejected in a manner consistent with the rejection of claim 1.

REGARDING CLAIM 9

De Vries and Monaghan disclose the limitations of claim 1. De Vries does not explicitly disclose, however Monaghan further discloses: the system of claim 8, wherein validating the dataset comprises: determining whether a characteristic of the dataset satisfies a threshold value of the target solution profile; and responsive to the determination that the characteristic of the dataset satisfies the threshold value, validating the digital measurement solution as achieving comparability of solutions (Monaghan at [0059] teaches, for example, that using a 95% threshold, the trained disease prediction ML model achieved 60% accuracy among the predicted positive patients that are actually positive for COVID, identified 2% of all positive patients as having COVID, labelled 99.9% of negative patients as negative, identified 10.8 times more patients as having COVID than if the sampling test dataset was random, and flagged 0.2% of the test population (e.g., the test data).
Using an 80% threshold, the trained disease prediction ML model achieved 36% accuracy among the predicted positive patients that are actually positive for COVID, identified 23% of all positive patients as having COVID, labelled 97.6% of negative patients as negative, identified 6.5 times more patients as having COVID than if the sampling test dataset was random, and flagged 3.5% of the test population (e.g., the test data). [0098] teaches that a threshold score for generating a positive prediction for a patient may be adjusted based on balancing the capability of the model to detect true positive cases against the desire to avoid false positives (interpreted by the examiner as determining whether a characteristic of the dataset satisfies a threshold value of the target solution profile, and a means for validating the digital measurement solution as achieving comparability of solutions responsive to the determination that the characteristic of the dataset satisfies the threshold value)).

REGARDING CLAIM 10

De Vries and Monaghan disclose the limitations of claim 1. Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 9, wherein validating the dataset further comprises, responsive to determining that the digital measurement solution achieves comparability of solutions, storing an indication of a successful validation in metadata of the digital measurement solution (De Vries at [0026] teaches having the flexibility of being able to store any kind or type of data (interpreted by the examiner as a means to store a successful validation in metadata)).

REGARDING CLAIM 11

De Vries and Monaghan disclose the limitations of claim 1.
Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 10, wherein the metadata of the digital measurement solution is stored in a catalog accessible for inspection by third-party users (De Vries at [0050] and [0067] teaches data loading from multiple data sources, data extraction and normalization, and aggregation and storage of data (interpreted by the examiner as a means to inspect)).

REGARDING CLAIM 12

De Vries and Monaghan disclose the limitations of claim 1. De Vries does not explicitly disclose, however Monaghan further discloses: the system of claim 8, wherein the specification of the digital measurement solution represents an upgraded capability in comparison to a prior version of the digital measurement solution (Monaghan at [0008] teaches training a machine learning model (interpreted by the examiner as an upgraded capability in comparison to a prior version of the digital measurement solution)).

REGARDING CLAIM 13

De Vries and Monaghan disclose the limitations of claim 1. De Vries does not explicitly disclose, however Monaghan further discloses: the system of claim 12, wherein the specification of the digital measurement solution represents an upgraded capability included in a newly released device used to capture the measurement of interest (Monaghan at [0008] teaches training a machine learning model (interpreted by the examiner as an upgraded capability included in a newly released device used to capture the measurement of interest)).

REGARDING CLAIM 14

De Vries and Monaghan disclose the limitations of claim 1. De Vries does not explicitly disclose, however Monaghan further discloses: the system of claim 13, wherein the upgraded capability is one of an upgraded battery, upgraded data storage, upgraded acquisition frequency, or an upgraded data collection algorithm (Monaghan at [0008] teaches training a machine learning model (interpreted by the examiner as an upgraded data collection algorithm)).
REGARDING CLAIM 15

De Vries and Monaghan disclose the limitations of claim 1. Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 1, wherein the common class of the plurality of digital measurement solutions represents a common method of measuring activity from an individual (De Vries at [0020] teaches measuring health parameters, and [0043] teaches physical measurement and physical activity).

REGARDING CLAIM 16

De Vries and Monaghan disclose the limitations of claim 1. Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 15, wherein the common method of measuring activity is the use of a class of devices comprising one or more of wearable devices, devices including accelerometers, devices including gyroscopes, ingestibles, image- and voice-based devices, touchless sensors, and sensor-based smart devices (De Vries at [0029] teaches wearable devices).

REGARDING CLAIM 17

De Vries and Monaghan disclose the limitations of claim 1. Monaghan does not explicitly disclose, however De Vries further discloses: the system of claim 1, wherein the instrumentation asset comprises a machine learning algorithm that transforms data captured according to the measurement definition to the dataset (De Vries at [0060] and [0061] teaches the use of machine learning models).

REGARDING CLAIM 18

Claim 18 is analogous to claims 1-17; thus, claim 18 is similarly analyzed and rejected in a manner consistent with the rejection of claims 1-17.

Response to Arguments

Rejection under 35 U.S.C. § 101

Regarding the rejection of claims 1 and 3-18, the Examiner has considered the Applicant's arguments but does not find them persuasive. Applicant argues:

In contrast, the amended claims include multiple steps that cannot be reasonably characterized as human behaviors or activities.
Claim 1 is directed to a system for characterizing a disease of a subject through measurement stacks of multiple digital components including a digital measurement solution. Further, the amended claim recites particular digital assets including, e.g., a digital measurement solution, an instrumentation asset, an evidence asset, and so on. Further, the amended claims include limitations that simply would not be performed for the purpose of managing human personal behavior or relationships or interactions between people, as they are directed to the classification of disease based on digital measurement data.

Regarding 1, the Examiner respectfully disagrees. The claims as drafted are processes that, under the broadest reasonable interpretation, cover certain methods of organizing human activity (i.e., managing personal behavior, including following rules or instructions) but for the recitation of generic computer components. A person can follow a set of rules and instructions to obtain a measurement from a subject, select a digital measurement solution, and apply the digital measurement solution. The Examiner notes that certain "method[s] of organizing human activity" include a person's interaction with a computer (see MPEP 2106.04(a)(2)(II)). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or interactions between people but for the recitation of generic computer components, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Applicant further argues:

Even if the claims are found to recite an abstract idea under Step 2A, Prong One of the Alice/Mayo Test, the claims are not "directed to" the abstract idea because they integrate the abstract idea into a practical application.
Viewing the claims as a whole, the claims are directed to a practical application: a computer-implemented system for characterizing a disease of a subject through measurement stacks of multiple digital components including a digital measurement solution. An applicant can demonstrate such integration when the claimed invention improves the "functioning of a computer, or an improvement to other technology or technical field."… The specification explains how the claimed features permit an ecosystem that "allow[s] for harmonization between multiple assets and components", "allow[s] for improved life cycle management" of digital measurement solutions, and "accelerates the adoption of digital measures and long-term research interoperability (e.g., interoperability across different clinical trials) first in clinical research and additionally in clinical care". Specification at [0004]…

Regarding 2, the Examiner respectfully disagrees. The claims do not provide a practical application. The additional element of one or more processors that perform the abstract idea is not defined by the Applicant and is recited at a high level of generality (i.e., a generic computer processor for performing generic computer functions; see Spec. paras. 00171-00174 and 00179), such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. Furthermore, the additional elements of an instrumentation asset and a particular device are recited at a high level of generality (i.e., a general means to output/receive/transmit/measure data) and amount to extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application.
Thus, the claims do not provide an improvement to the functioning of a computer or to a technical field, as the additional elements are generic computer components recited at a high level of generality and cannot provide an improvement.

For at least the reasons above, the use of a machine learning model in the independent claims is not "merely a tool" for performing the alleged abstract idea. Unlike ineligible patent claim examples in which a generic computer merely performs the steps of the alleged abstract idea, the present claims recite numerous steps used to train a machine learning model based on the improved techniques described in the specification, thus imposing meaningful limits on the claimed processes.

Regarding 3, the Examiner respectfully disagrees. The additional element of a "machine learning algorithm" is interpreted as merely applying ("apply it") the abstract idea. MPEP 2106.04(d)(I) indicates that merely saying "apply it" or an equivalent to the abstract idea cannot provide a practical application, and MPEP 2106.05(I)(A) indicates that merely saying "apply it" or an equivalent to the abstract idea cannot provide an inventive concept ("significantly more"). As such, the claim is not patent eligible.

Rejection under 35 U.S.C. § 103

Regarding the rejection of claims 1, 3-18, the Examiner has considered the Applicant's arguments, but does not find them persuasive. Applicant argues:

Thus, Applicant submits that the cited documents, whether taken alone or in combination, assuming such could be combined, fail to disclose or suggest important features of Applicant's invention as recited in Claim 1. Applicant therefore respectfully submits that the rejection under 35 U.S.C. § 103 of Claim 1 should be withdrawn.

Regarding 1, the Examiner respectfully disagrees.
De Vries teaches obtaining a measurement of interest from the subject…, an instrumentation asset that is device technology agnostic, wherein (i) the instrumentation asset of the selected digital measurement solution specifies the measurement method …, and (ii) the measurement method specifies a particular device for capturing raw data (De Vries at [0020]), wherein the digital measurement solution comprises: a measurement definition defining one or more concepts of interest relevant to the disease (De Vries at [0030] and [0066]); and wherein applying the selected digital measurement solution comprises applying the instrumentation asset to the measurement of interest to transform raw data into a dataset that is informative for characterizing the disease (De Vries at [006] and [0028]).

Moreover, Monaghan teaches selecting a digital measurement solution from a plurality of digital measurement solutions, wherein the plurality of digital measurement solutions are of a common class that is represented by a target solution profile and wherein the target solution profile represents a generalization of the plurality of digital measurement solutions; and applying the selected digital measurement solution to the obtained measurement of interest to characterize the disease for the subject (Monaghan at [0094]), and an evidence asset for performing one or more validations on a dataset generated by the instrumentation asset (Monaghan at [0084]).

Given the broadest reasonable interpretation, the cited references in combination teach the claimed features.

Conclusion

Applicant's amendment necessitated the new grounds of rejection presented in this Office action. THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record, though not relied upon in the present basis of rejection, is noted in the attached PTO-892 and includes: Lin (US 2017/0181669), which teaches a graphene-based nanosensor for identifying target analytes.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIZA TONY KANAAN, whose telephone number is (571) 272-4664. The examiner can normally be reached Mon-Thu 9:00am-6:00pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Robert Morgan, can be reached at 571-272-6773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from the Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docs for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LIZA TONY KANAAN/
Examiner, Art Unit 3683

/ROBERT W MORGAN/
Supervisory Patent Examiner, Art Unit 3683

Prosecution Timeline

Nov 03, 2023
Application Filed
Jul 12, 2025
Non-Final Rejection — §101, §103
Oct 16, 2025
Response Filed
Jan 09, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586689
UI DESIGN FOR PATIENT AND CLINICIAN CONTROLLER DEVICES OPERATIVE IN A REMOTE CARE ARCHITECTURE
2y 5m to grant Granted Mar 24, 2026
Patent 12580063
METHODS AND SYSTEMS FOR RADIOTHERAPY TREATMENT PLANNING BASED ON DEEP TRANSFER LEARNING
2y 5m to grant Granted Mar 17, 2026
Patent 12288606
REHABILITATION SYSTEM AND IMAGE PROCESSING APPARATUS FOR HIGHER BRAIN DYSFUNCTION
2y 5m to grant Granted Apr 29, 2025
Patent 12170146
OMNICHANNEL THERAPEUTIC PLATFORM
2y 5m to grant Granted Dec 17, 2024
Patent 12040058
SYSTEMS AND METHODS FOR PROVIDING CLINICAL TRIAL STATUS INFORMATION FOR PATIENTS
2y 5m to grant Granted Jul 16, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
23%
Grant Probability
58%
With Interview (+35.3%)
3y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 115 resolved cases by this examiner. Grant probability derived from career allow rate.
