Prosecution Insights
Last updated: April 19, 2026
Application No. 17/344,510

COMPUTER SYSTEMS AND METHODS FOR MACHINE-LEARNING BASED TREATMENT MODELING FOR ONCOLOGY BASED ON INCONSISTENT RAS BIOMARKER DETECTION DATA RECORDS

Non-Final OA: §101, §103, Double Patenting
Filed: Jun 10, 2021
Examiner: ANDERSON-FEARS, KEENAN NEIL
Art Unit: 1687
Tech Center: 1600 (Biotechnology & Organic Chemistry)
Assignee: Optum Inc.
OA Round: 3 (Non-Final)

Grant Probability: 6% (At Risk)
OA Rounds: 3-4
Time to Grant: 5y 1m
Grant Probability with Interview: 56%

Examiner Intelligence

Career Allow Rate: 6% (1 granted / 16 resolved; -53.7% vs Tech Center average). This examiner grants only 6% of cases.
Interview Lift: +50.0% among resolved cases with an interview (a strong lift).
Typical Timeline: 5y 1m average prosecution; 45 applications currently pending.
Career History: 61 total applications across all art units.

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 33.2% (-6.8% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 16 resolved cases.
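The headline figures above are simple ratios over the examiner's resolved cases, and the interview lift appears to be reported in percentage points (6% overall vs. 56% with interview). A minimal sketch of that arithmetic (function names are illustrative, not taken from any analytics tool):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate: granted cases over resolved (granted + abandoned) cases."""
    return granted / resolved if resolved else 0.0

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Interview lift expressed in percentage points, as on the dashboard."""
    return rate_with_interview - rate_without

# 1 granted of 16 resolved cases, as reported above.
overall = allow_rate(1, 16)          # 0.0625, displayed as 6%
lift = interview_lift(0.56, 0.06)    # ~0.50, displayed as +50.0%
```

Nothing here is specific to this examiner's data pipeline; it only makes explicit how the displayed percentages relate to the underlying counts.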

Office Action

Grounds: §101, §103, Double Patenting
DETAILED ACTION

Applicant's response, filed 1/12/2026, has been fully considered. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/12/2026 has been entered.

Priority

It is noted that the instant application does not claim the benefit of priority to any earlier filed application, and thus the effective filing date of claims 1-3, 5, 7-10, 13-16 and 19-23 is 6/10/2021.

Claim Status

Claims 1-3, 5, 7-10, 13-16 and 19-23 are pending and stand rejected. Claims 21-23 are newly added.

Claim Rejections - 35 USC § 101

Response to Amendment

In view of applicant's amendments, the previous claim rejections under 35 USC § 101 have been updated, and a response to applicant's arguments is provided following said updates.

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5, 7-10, 13-16 and 19-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.
The claims recite a method, system, and CRM (computer-readable medium) for reconciling potentially inconsistent data sets for use with computer-implemented data models. The judicial exception is not integrated into a practical application because, while claims 1-3, 5, 7-10, 13-16 and 19-23 attempt to integrate the exception into a practical application, the additional elements are either generically recited computer elements that do not add a meaningful limitation to the abstract idea, or insignificant extra-solution activity that merely implements the abstract idea on a computer. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because the computer elements only store and retrieve information in memory and perform basic calculations, which are well-understood, routine and conventional computer functions as recognized by the decisions listed in MPEP § 2106.05(d).

Framework with which to Analyze Subject Matter Eligibility

Step 1: Are the claims directed to a statutory category of subject matter (a process, machine, manufacture, or composition of matter)? [See MPEP § 2106.03]

The claims are directed to statutory subject matter, specifically a method (claims 1-8), a system (claims 9-14) and a CRM (claims 15-20).

Step 2A Prong One: Do the claims recite a judicially recognized exception, i.e., an abstract idea, a law of nature, or a natural phenomenon? [See MPEP § 2106.04(a)]

The instant claims recite abstract ideas that fall into the groupings of mental processes and mathematical concepts.
Claims 1, 9 and 15: Generating an input data set, filtering data to identify inconsistent timestamp and biomarker mutation indicators, filtering data to identify inconsistent mutation status or progression, generating a plurality of biomarker mutation indicators, initiating preprocessing steps, and generating an output based on the derived biomarker mutation indicator are processes of aggregating, comparing/contrasting and selecting data that can be performed with pen and paper or in the human mind, and are therefore abstract ideas, specifically mental processes. The observation data comprising a biomarker mutation indicator and a timestamp, and a derived biomarker mutation indicator comprising a first derived indicator based on the observation data, are directed to the information itself, and are therefore abstract ideas, specifically mental processes.

Claim 2: The input data set comprising a flat input data file with the validated subset is directed to the information itself, and is therefore an abstract idea, specifically a mental process.

Claims 3, 10, and 16: The first filter being configured to eliminate inconsistent observation records is a process of comparing/contrasting and selecting data that can be performed with pen and paper or in the human mind, and is therefore an abstract idea, specifically a mental process.

Claim 5: Retrieving the relevant data pre-processing methodology and generating input data using said methodology are processes of aggregating, comparing/contrasting and selecting data that can be performed with pen and paper or in the human mind, and are therefore abstract ideas, specifically mental processes.

Claims 7, 13, and 19: The filtering being one or more of a date-based filter, a source-based filter, or a content filter is a process of comparing/contrasting and selecting data that can be performed with pen and paper or in the human mind, and is therefore an abstract idea, specifically a mental process.
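To make the disputed filtering concrete: the first filter described in the claims eliminates observation records whose biomarker mutation indicators conflict for the same subject and timestamp. A minimal sketch, with hypothetical field names (`patient_id`, `ras_status`) that are not taken from the application itself:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Observation:
    patient_id: str
    timestamp: date
    ras_status: str  # e.g. "mutant" / "wild-type" (illustrative values)

def first_filter(records):
    """Eliminate records whose biomarker status conflicts with another record
    for the same patient and timestamp. A second filter (e.g. on implausible
    mutation-status progression over time) could follow the same pattern."""
    seen = {}
    conflicted = set()
    for r in records:
        key = (r.patient_id, r.timestamp)
        if key in seen and seen[key] != r.ras_status:
            conflicted.add(key)
        seen.setdefault(key, r.ras_status)
    # Keep only records whose (patient, timestamp) pair never conflicted.
    return [r for r in records if (r.patient_id, r.timestamp) not in conflicted]
```

Duplicate records that agree are kept; only genuinely conflicting pairs are dropped, matching the "identify and eliminate an inconsistent observation data record" language.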
Claims 8, 14, and 20: The machine learning model being a linear regression model is merely a verbal articulation of a mathematical process and is therefore an abstract idea, specifically a mathematical concept.

Claims 21, 22, and 23: Merging the validated subset of the plurality of observation data records and an external record is a process of combining data that can be performed with pen and paper or in the human mind, and is therefore an abstract idea, specifically a mental process.

Step 2A Prong Two: If the claims recite a judicial exception under Prong One, is the judicial exception integrated into a practical application? [See MPEP § 2106.04(d) and MPEP § 2106.05(a)-(c) & (e)-(h)]

Because the claims do recite judicial exceptions, Step 2A Prong Two requires that the claims be examined further to determine whether they integrate the abstract ideas into a practical application. The claims recite the following additional, non-abstract elements:

Claim 1: A computer and processors are generic and nonspecific computer elements that do not improve the functioning of any computer or technology described herein [See MPEP § 2106.04(d)(1) and MPEP § 2106.05(d)]. Retrieving a plurality of observation data records is insignificant extra-solution activity, specifically mere data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering); performing clinical tests on individuals to obtain input for an equation, In re Grams, 888 F.2d 835, 839-40, 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989); and determining the level of a biomarker in blood, Mayo, 566 U.S. at 79, 101 USPQ2d at 1968. See also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012) (assessing or measuring data derived from an ultrasound scan, to be used in a diagnosis)) [See MPEP § 2106.05(g)]. Training a machine learning model merely recites instructions to apply an exception, as the claim only recites the idea of a solution or an outcome without explaining how a technical problem is addressed (See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); and Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015)) [See MPEP § 2106.05(f)].

Claim 9: A system, processors, memory, and executable instructions are generic and nonspecific computer elements that do not improve the functioning of any computer or technology described herein [See MPEP § 2106.04(d)(1) and MPEP § 2106.05(d)]. The retrieving and training limitations are insignificant extra-solution activity and mere instructions to apply an exception for the same reasons, and on the same authority, as set forth for claim 1 [See MPEP § 2106.05(g) and MPEP § 2106.05(f)].

Claim 15: Non-transitory computer readable media, instructions, and processors are generic and nonspecific computer elements that do not improve the functioning of any computer or technology described herein [See MPEP § 2106.04(d)(1) and MPEP § 2106.05(d)]. The retrieving and training limitations are insignificant extra-solution activity and mere instructions to apply an exception for the same reasons, and on the same authority, as set forth for claim 1 [See MPEP § 2106.05(g) and MPEP § 2106.05(f)].
Claims 21, 22, and 23: Training a machine learning model merely recites instructions to apply an exception, as the claims only recite the idea of a solution or an outcome without explaining how a technical problem is addressed (See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); and Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015)) [See MPEP § 2106.05(f)].

Step 2B: If the claims do not integrate the judicial exception, do the claims provide an inventive concept? [See MPEP § 2106.05]

Because the additional claim elements do not integrate the abstract idea into a practical application, the claims are further examined under Step 2B, which evaluates whether the additional elements, individually and in combination, amount to significantly more than the judicial exception by providing an inventive concept. They do not, because the additional elements are generic, conventional or nonspecific:

The additional elements of a system, processors, memory, non-transitory computer readable media, instructions, and executable instructions are generic and nonspecific computer elements that are well-understood, routine and conventional within the art and therefore do not improve the functioning of any computer or technology described herein (See receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); performing repetitive calculations, Flook, 437 U.S. at 594, 198 USPQ at 199 (recomputing or readjusting alarm limit values); and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015)) [See MPEP § 2106.05(d)(II)].

The additional element of retrieving a plurality of observation data records is conventional mere data gathering (See Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015); In re Grams, 888 F.2d 835, 839-40, 12 USPQ2d 1824, 1827-28 (Fed. Cir. 1989); see also PerkinElmer, Inc. v. Intema Ltd., 496 Fed. App'x 65, 73, 105 USPQ2d 1960, 1966 (Fed. Cir. 2012)). Training a machine learning model is conventional (Udousoro et al., subsections 2 and 3) and merely recites instructions to apply an exception, as the claim only recites the idea of a solution or an outcome without explaining how a technical problem is addressed (See Electric Power Group, 830 F.3d at 1356; Symantec, 838 F.3d at 1327; Internet Patents Corp., 790 F.3d at 1348) [See MPEP § 2106.05(f)].

Therefore, taken both individually and as a whole, the additional elements do not amount to significantly more than the judicial exception by providing an inventive concept.
Therefore, claims 1-3, 5, 7-10, 13-16 and 19-23, when the limitations are considered individually and as a whole, are rejected under 35 USC § 101 as being directed to non-statutory subject matter.

Response to Arguments

Applicant's arguments filed 1/12/2026 have been fully considered but are not persuasive.

Applicant asserts on page 12 of the Remarks, with regard to Step 2A Prong One, that the human mind is not capable of performing the recited limitations of claims 1, 9, and 15, pointing specifically to the "receiving, by one or more processors…", "initiating, by one or more processors…", "executing, by one or more subprocesses…", "generating, by one or more processors…", and "training, by one or more processors…" limitations. However, the examiner reminds applicant that MPEP § 2106.04(a)(2), subsection III, states: "The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation… Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, '[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person's mind.' Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015)." In other words, the fact that a claim uses computers does not mean it is directed to computers, and the fact that a judicial exception is implemented by computers does not mean it cannot be performed by the human mind, with or without additional aid.
Applicant asserts on pages 13 and 14 of the Remarks, with regard to Step 2A Prong Two, that the judicial exception is integrated into a practical application, specifically citing an improvement to technology with reference to paragraphs [0022]-[0024] of the specification. On page 9 of the Remarks, applicant asserts that the claims conform to an example within Desjardins, specifying the invention's improvement to model accuracy and speed in the rectification of data records. On page 15, applicant ties this together into a claim that the improvement is to the machine learning model: "inconsistencies in observation data may impact the generation and/or training of machine-learning models, such that later implementation of the trained models is incapable of generating precise data outputs that can be relied upon for user decision making". However, the examiner reminds applicant that while the training of the machine learning model may be an additional element, the improvement applicant points to is an improvement to the precision of the judicial exception itself (the generating of the prediction regarding severity), and MPEP § 2106.05(a) notes: "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements." Additionally, within the Desjardins examples the improvement came from overcoming the "catastrophic forgetting" that is part and parcel of continual learning systems, not from an improvement to the model's accuracy.

Finally, on page 16 of the Remarks, with regard to Step 2B, applicant asserts that the combination of additional elements is not conventional. However, applicant has not provided any factual evidence to that effect, and the factual evidence provided in the above rejection supports that the additional elements are conventional.
Claim Rejections - 35 USC § 103

Response to Amendment

In view of applicant's amendments, the previous claim rejections under 35 USC § 103 have been updated, including performance of a new search with newly cited prior art, and a response to applicant's arguments is provided following said updates.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 7, 9-10, 13, 15-16, 19, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Bess et al. (US 10572461 B2; previously cited) in view of Tucker et al. (US 20180089376 A1; previously cited), Potter (US 20170293734 A1; previously cited), Weiskopf et al. (Journal of the American Medical Informatics Association (2013) 144-151; newly cited), Malley et al. (Secondary Analysis of Electronic Health Records (2016) 115-141; previously cited), and Kogan et al. (BMC Medical Informatics and Decision Making (2020) 1-8; newly cited).

Claims 1, 9, and 15 are directed, respectively, to a method, a system, and a CRM for intake and reconciliation of inconsistent data sets for use with machine learning models for predicting health outcomes using a variety of filtering techniques.

Bess et al. teaches in paragraph [0171] "a filter can be provided which allows all or a subset of users to be selected. As another example, a filter can be provided which allows a time period to be selected. In yet another example, a filter can be provided which allows actions associated with a particular software application (module) to be selected. In a further example, a filter can be provided which allows a user to select from one or more types of actions performed, such as an update or a merge. The filters can be selected alone or in combination with one another", reading on generating, by one or more processors, an input data set comprising a validated subset of the plurality of observation data records that comprises the observation data record, by applying a first filter and a second filter.

Tucker et al. teaches in paragraph [0008] "receiving the medical data in one or more formats from a plurality of sources over a network, the medical data comprising a plurality of events associated with one or more patients, converting the medical data from the one or more formats to a standardized data format, storing the standardized data in a database, receiving a query comprising at least one patient characteristic selected from a group comprising a patient identifier, a biomarker, a status, a drug and line of therapy combination, an age of patient at diagnosis, and a date of diagnosis", and in paragraph [0043] "In one embodiment, a query for the data may include patient characteristics, such as patient identifier (ID), biomarker status, stage, drug/line combination, lines of therapy, age range at advanced diagnosis, date of advanced diagnosis, where did the test sample come from, details on the actual Epidermal Growth Factor Receptor (EGFR) mutation, where was the test tissue collected from (for cancer tests), type of assay, like straining intensity, if metastasized and if spread (for cancer patients), etc", reading on receiving, by one or more processors, a plurality of observation data records, wherein an observation data record of the plurality of observation data records comprises (i) a biomarker mutation indicator that indicates a mutation status of at least one gene and (ii) a timestamp.

Tucker et al. teaches in paragraph [0051] "According to one embodiment, the user may be provided one or more filters. For example, data filtering may allow the user to refine the received data even further to include only the data needed for a specific task and to exclude data that can be repetitive or irrelevant", reading on a first filter and a second filter, and in paragraph [0069] "In the exemplary embodiment of FIG. 6, interface includes biomarker status selector, stage selector, drug input, line input, age range selector, date range selector, and find patient button", reading on comprising a corresponding timestamp and a biomarker mutation indicator.

Malley et al. teaches on page 117, paragraph 4 "Knowledge engineering tools may also be used to detect the violation of known data constraints. For example, known functional dependencies among attributes can be used to find values contradicting the functional constraints", on page 118, paragraph 1 "The same information is often entered in different formats by these different sources", and on page 118, paragraph 3 "In order to produce an accurate dataset for analysis, the goal is for each patient to have the same event represented in the same manner for analysis. As such, dealing with inconsistency perfectly would usually have to happen at the data entry or data extraction level. However, as data extraction is imperfect, pre-processing becomes important. Often, correcting for these inconsistencies involves some understanding of how the data of interest would have been captured in the clinical setting and where the data would be stored in the EHR database", reading on configured to identify and eliminate an inconsistent observation data record of the plurality of observation data records that fails to satisfy at least one of the first filter or the second filter.

Weiskopf et al. teaches on page 145, column 2, paragraph 3 "Concordance: Is there agreement between elements in the EHR, or between the EHR and another data source?", on page 147, column 1, paragraph 6 "Measurement of concordance is generally based on elements contained within the EHR", and on page 147, column 2, paragraph 1 "The most common approach to assessing concordance was to look at agreement between elements within the EHR, especially diagnoses and associated information such as medications or procedures. The second most common method used to assess concordance was to look at the agreement of EHR data with data from other sources. These other sources included billing information, paper records, patient-reported data, and physician-reported data. Another approach was to compare distributions of data within the EHR with distributions of the same information from similar medical practices or with national rates", likewise reading on configured to identify and eliminate an inconsistent observation data record of the plurality of observation data records that fails to satisfy at least one of the first filter or the second filter.

Potter teaches in paragraph [0070] "In some examples, results and evaluations from previously processed reports are applied as feedback in a machine learning process utilizing the Medical Condition Database to refine and enhance the system-determined risk levels", in paragraph [0103] "Additionally or alternatively, machine learning methods may be applied to refine the clinical severity of individual certain clinical cues", and in paragraph [0117] "Additionally or alternatively, determined risk levels can be reused as feedback in a machine learning module, such that the risk is further refined by past risk determinations", reading on generating, by one or more processors, a plurality of derived biomarker mutation indicators for the validated subset, wherein a derived biomarker mutation indicator of the plurality of derived biomarker mutation indicators comprises a first derived biomarker mutation indicator generated based at least in part on the biomarker mutation indicator of the observation data record, and generating, by one or more processors and using a machine learning model, an output for the input data set based at least in part on the first derived biomarker mutation indicator of the validated subset.

Kogan et al. teaches in the abstract "Stroke severity is an important predictor of patient outcomes and is commonly measured with the National Institutes of Health Stroke Scale (NIHSS) scores. Because these scores are often recorded as free text in physician reports, structured real-world evidence databases seldom include the severity. The aim of this study was to use machine learning models to impute NIHSS scores for all patients with newly diagnosed stroke from multi-institution electronic health record (EHR) data. NIHSS scores available in the Optum© de-identified Integrated Claims-Clinical dataset were extracted from physician notes by applying natural language processing (NLP) methods. The cohort analyzed in the study consists of the 7149 patients with an inpatient or emergency room diagnosis of ischemic stroke, hemorrhagic stroke, or transient ischemic attack and a corresponding NLP-extracted NIHSS score. A subset of these patients (n = 1033, 14%) were held out for independent validation of model performance and the remaining patients (n = 6116, 86%) were used for training the model", reading on training, by the one or more processors and using the validated subset of the plurality of observation data records, a machine learning model configured to generate a severity output based on the first derived biomarker mutation indicator of the validated subset.

It would have been obvious before the effective filing date to a person skilled in the art to modify the teachings of Bess et al., which teach a method for managing patient information databases, with the teachings of Tucker et al. for visualizing medical data from databases, and with the teachings of Potter for the use of biomarkers to identify inconsistent patient records, specifically relying on Weiskopf et al. and Malley et al., which provide justification in that both teach the importance of data concordance, particularly in EHR or medical databases (Weiskopf et al. page 147, col. 1-2, and Malley et al. pages 117-118 and 120).
Additionally, it would have been obvious to combine these with the teachings of Kogan et al. for the training of a machine learning model for the prediction of severity outcomes, as Kogan et al. teaches in the abstract "Leveraging machine learning we identified the main factors in electronic health record data for assessing stroke severity, including death within the same month as stroke occurrence, length of hospital stay following stroke occurrence, aphagia/dysphagia diagnosis, hemiplegia diagnosis, and whether a patient was discharged to home or self-care. Comparing the imputed NIHSS scores to the NLP-extracted NIHSS scores on the holdout data set yielded an R2 (coefficient of determination) of 0.57, an R (Pearson correlation coefficient) of 0.76, and a root-mean squared error of 4.5. Machine learning models built on EHR data can be used to determine proxies for stroke severity. This enables severity to be incorporated in studies of stroke patient outcomes using administrative and EHR databases".

Weiskopf et al. and Malley et al. specifically teach the importance of data concordance, particularly as it relates to EHR data sets: "Research using electronic health records (EHR) often involves the secondary analysis of health records that were collected for clinical and billing (non-study) purposes and placed in a study database via automated processes. Therefore, these databases can have many quality control issues. Pre-processing aims at assessing and improving the quality of data to allow for reliable statistical analysis". Bess et al. and Tucker et al. expand upon some of the principles mentioned in Weiskopf et al. and Malley et al., specifically the merging and filtering of data, focusing on particular methods of filtering: by date, content, source, biomarker, etc. Finally, Potter provides the machine learning method for creating a type of scoring for the biomarkers, and thereby for the data sets, as a means of filtering data.
One would have had a reasonable expectation of success given that all methods are directed to managing and visualizing data, specifically in the context of filtering data, with both Weiskopf et al. and Malley et al. impressing the importance of concordance in this context. Potter et al. is the only outlier in this regard; however, it too is directed to “identifying significant incidental findings from medical records”, which an individual skilled in the art would identify as applicable to identifying concordance within datasets. Therefore, it would have been obvious at the time of invention to a person skilled in the art to modify the teachings of each and to be successful. Claim 2 is directed to the method of claim 1 but further specifies that the generated input data set comprises a flat input data file. Malley et al. teaches on page 120, paragraph 3 “Lastly, as part of more effective organization of datasets, one would also aim to reshape the columns and rows of a dataset so that it conforms with the following 3 rules of a “tidy” dataset: 1. Each variable forms a column, 2. Each observation forms a row, 3. Each value has its own cell”, which is a basic flat table otherwise known as a flat data file, reading on wherein generating the input data set comprises generating a flat input data file comprising the validated subset. Claim 3 is directed to the method of claim 1 but further specifies that the first filter is configured to eliminate the inconsistent observation data based on the biomarker mutation indicator conflicting between reports. Claim 10 is directed to the system of claim 9 but further specifies that the first filter is configured to eliminate the inconsistent observation data based on the biomarker mutation indicator conflicting between reports.
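The “tidy” reshaping quoted from Malley et al. (each variable a column, each observation a row, one value per cell) is what the rejection characterizes as a flat data file. A minimal illustrative sketch, with hypothetical field names not drawn from the record:

```python
import csv
import io

def to_flat_file(records, columns):
    """Serialize observation records as a flat (tidy) file: each
    variable forms a column, each observation forms a row, and
    each value has its own cell."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns)
    writer.writeheader()
    for rec in records:
        writer.writerow({col: rec.get(col, "") for col in columns})
    return buf.getvalue()
```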
Claim 16 is directed to the CRM of claim 15 but further specifies that the first filter is configured to eliminate the inconsistent observation data based on the biomarker mutation indicator conflicting between reports. Weiskopf et al. teaches on page 145, column 2, paragraph 3 “Concordance: Is there agreement between elements in the EHR, or between the EHR and another data source?”, on page 147, column 1, paragraph 6 “Measurement of concordance is generally based on elements contained within the EHR”, and on page 147, column 2, paragraph 1 “The most common approach to assessing concordance was to look at agreement between elements within the EHR, especially diagnoses and associated information such as medications or procedures. The second most common method used to assess concordance was to look at the agreement of EHR data with data from other sources. These other sources included billing information, paper records, patient-reported data, and physician-reported data. Another approach was to compare distributions of data within the EHR with distributions of the same information from similar medical practices or with national rates”. Tucker et al. teaches in paragraph [0069] “In the exemplary embodiment of FIG. 6, interface includes biomarker status selector, stage selector, drug input, line input, age range selector, date range selector, and find patient button”, which in view of the above Weiskopf et al. passages, reads on wherein the first filter is further configured to eliminate the inconsistent observation data record and the second observation data record responsive to the inconsistent observation data record comprising the first biomarker mutation indicator that conflicts with the second biomarker mutation indicator of the second observation data record.
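The concordance-based elimination mapped above, dropping both members of a pair of records whose biomarker mutation indicators conflict, can be sketched as follows; the record keys are hypothetical and chosen for illustration only:

```python
from collections import defaultdict

def concordance_filter(records):
    """Eliminate every observation record for a (patient, biomarker)
    pair whose mutation indicators conflict across reports, keeping
    only records whose indicators are concordant."""
    indicators_seen = defaultdict(set)
    for rec in records:
        key = (rec["patient_id"], rec["biomarker"])
        indicators_seen[key].add(rec["mutation_indicator"])
    # A key with more than one distinct indicator is discordant;
    # both (all) of its records are eliminated.
    return [
        rec for rec in records
        if len(indicators_seen[(rec["patient_id"], rec["biomarker"])]) == 1
    ]
```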
Claim 5 is directed to the method of claim 1, but further specifies that the generating of the input data set is performed in accordance with a relevant data pre-processing methodology relating to colon cancer or rectal cancer, and the retrieval of the relevant data pre-processing methodology from a set of methodologies. Bess et al. teaches in the abstract “The inverted index formulation enables faster, more complete and more flexible duplicate detection as compared to traditional master patient database management techniques. A master patient index management system including a remote user interface configured to leverage the inverted index formulation is described. The user interface includes features for managing records in an MPI database including identifying, efficiently comparing, updating and merging duplicate records across a heterogeneous healthcare organization”. Tucker et al. teaches in paragraph [0069] “In the exemplary embodiment of FIG. 6, interface includes biomarker status selector, stage selector, drug input, line input, age range selector, date range selector, and find patient button”, which in view of the above Bess et al. passage, reads on wherein generating the model input data set is performed in accordance with a relevant data pre-processing methodology relating to colon cancer or rectal cancer, and wherein the computer-implemented method further comprises retrieving the relevant data pre-processing methodology from a plurality of data pre-processing methodologies based at least in part on the plurality of independently generated observation data records prior to generating the model input data set. Claim 7 is directed to the method of claim 1 but further specifies preliminary filter criteria comprising one of a date-based filter, a data source filter, or a data content filter.
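The retrieval step mapped onto claim 5, selecting a condition-specific pre-processing methodology from a plurality of methodologies based on the incoming records, can be sketched as a registry lookup; the registry keys and the "condition" field are hypothetical illustrations, not terms from the claims:

```python
def select_methodology(records, registry):
    """Return the first registered pre-processing methodology whose
    key matches a condition identifier found in the records, falling
    back to a default methodology if none matches."""
    for rec in records:
        methodology = registry.get(rec.get("condition"))
        if methodology is not None:
            return methodology
    return registry.get("default")
```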
Claim 13 is directed to the system of claim 9 but further specifies preliminary filter criteria comprising one of a date-based filter, a data source filter, or a data content filter. Claim 19 is directed to the CRM of claim 15 but further specifies preliminary filter criteria comprising one of a date-based filter, a data source filter, or a data content filter. Bess et al. teaches in paragraph [0172] “An example of information returned in a search according to selected filters is provided. The information includes a) a user, which is associated with an action on the MPI database, b) a date/time recorded for the action, c) a module which was used to perform the action, d) the action performed, e) a patient name associated with the action (if there is only one, such as for a record modification), f) additional details about the action and g) a button for viewing data about the action”, and in paragraph [0107] “In another embodiment, the scoring weight can be also given to a selected field if the field is known to be more precise when coming from a particular source”, reading on a date-based filter criterion for selecting observation data records generated within a defined date range, a data source filter criterion for selecting observation data records generated by one or more defined data sources, and a data content filter criterion for selecting observation data records containing an identifier selected from a plurality of available identifiers eligible for further analysis. Claim 21 is directed to the method of claim 1 but further specifies the merging of data records and the training, using said data, of the machine learning model to output a severity based on the biomarkers. Claim 22 is directed to the system of claim 9 but further specifies the merging of data records and the training, using said data, of the machine learning model to output a severity based on the biomarkers.
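The three preliminary filter criteria recited in claims 7, 13, and 19 (date range, data source, data content) can be illustrated as a single pass over the records; the field names are hypothetical and serve only to show the structure of such a filter:

```python
from datetime import date

def preliminary_filter(records, date_range=None, sources=None, identifiers=None):
    """Keep only records satisfying each supplied criterion: generated
    within a defined date range, generated by a defined data source,
    and containing an identifier eligible for further analysis."""
    kept = []
    for rec in records:
        if date_range and not (date_range[0] <= rec["service_date"] <= date_range[1]):
            continue  # date-based filter criterion
        if sources and rec["source"] not in sources:
            continue  # data source filter criterion
        if identifiers and not set(rec["identifiers"]) & set(identifiers):
            continue  # data content filter criterion
        kept.append(rec)
    return kept
```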
Claim 23 is directed to the CRM of claim 15 but further specifies the merging of data records and the training, using said data, of the machine learning model to output a severity based on the biomarkers. Kogan et al. teaches in the abstract “Stroke severity is an important predictor of patient outcomes and is commonly measured with the National Institutes of Health Stroke Scale (NIHSS) scores. Because these scores are often recorded as free text in physician reports, structured real-world evidence databases seldom include the severity. The aim of this study was to use machine learning models to impute NIHSS scores for all patients with newly diagnosed stroke from multi-institution electronic health record (EHR) data. NIHSS scores available in the Optum© de-identified Integrated Claims-Clinical dataset were extracted from physician notes by applying natural language processing (NLP) methods. The cohort analyzed in the study consists of the 7149 patients with an inpatient or emergency room diagnosis of ischemic stroke, hemorrhagic stroke, or transient ischemic attack and a corresponding NLP-extracted NIHSS score. A subset of these patients (n = 1033, 14%) were held out for independent validation of model performance and the remaining patients (n = 6116, 86%) were used for training the model”, which in view of the previous teachings reads on merging, by the one or more processors, the validated subset of the plurality of observation data records and one or more externally provided severity observation data records to provide a set of training data; and training, by the one or more processors and using the set of training data, the machine learning model configured to generate the severity output based on the first derived biomarker mutation indicator of the validated subset. Claims 8, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bess et al. (US 10572461 B2; previously cited), Tucker et al. (US 20180089376 A1; previously cited), Potter et al.
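The merge-then-train limitation of claims 21-23, as mapped above, amounts to joining the validated subset with externally provided severity observations on a shared key to form training pairs. A minimal sketch, with hypothetical keys and field names:

```python
def build_training_set(validated_records, external_severity_records):
    """Merge the validated subset with externally provided severity
    observation records, yielding (features, severity label) pairs
    suitable for training a severity model."""
    severity_by_patient = {
        rec["patient_id"]: rec["severity"] for rec in external_severity_records
    }
    training = []
    for rec in validated_records:
        label = severity_by_patient.get(rec["patient_id"])
        if label is not None:
            # Feature here is the derived biomarker mutation indicator.
            training.append(({"ras_mutation": rec["ras_mutation"]}, label))
    return training
```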
(US 20170293734 A1; previously cited), Weiskopf et al. (Journal of the American Medical Informatics Association (2013) 144-151; previously cited), Malley et al. (Secondary Analysis of Electronic Health Records (2016) 115-141; previously cited), and Kogan et al. (BMC Medical Informatics and Decision Making (2020) 1-8; newly cited) as applied to claims 1, 9, and 15 above, and further in view of Igo et al. (Current Protocols in Human Genetics (2019) 1-12; previously cited). Claim 8 is directed to the method of claim 1 but further specifies that the machine learning model be a linear regression model. Claim 14 is directed to the system of claim 9 but further specifies that the machine learning model be a linear regression model. Claim 20 is directed to the CRM of claim 15 but further specifies that the machine learning model be a linear regression model. Bess et al., Tucker et al., Potter et al., Weiskopf et al., Malley et al., and Kogan et al. teach the method of claim 1, the system of claim 9, and the CRM of claim 15. Bess et al., Tucker et al., Potter et al., Weiskopf et al., Malley et al., and Kogan et al. do not teach that the machine learning model must be a linear regression model. Igo et al. teaches on page 3, column 1, paragraph 2 “In the case of continuous traits, the multiple R2 (squared correlation) from linear regression measures the trait variance accounted for by the predictors. This may be approximated for binary traits by a pseudo-R2 measure, derived from the likelihood under the models with and without genetic predictors (Menard, 2000; Witte et al., 2014), or by the squared empirical correlation”, reading on wherein the machine learning model is a linear regression model. It would have been obvious at the time of invention to a person skilled in the art to modify the teachings of Bess et al., Tucker et al., Potter et al., Weiskopf et al., and Malley et al.
for the method of claim 1, the system of claim 9, and the CRM of claim 15, with the teachings of Igo et al. for the use of linear regression in genetic risk scores, as a basic risk score is merely a weighted linear combination of features, as is the severity model according to paragraphs [0107]-[0108] “In certain embodiments, the generated severity data identifies one or more severity attributes that contributes to the severity score, such as a listing of comorbidities of the patient, one or more indications of detected RAS/KRAS/NRAS mutations, one or more markers indicative of the stage of the patient's cancer, and/or the like… The severity models may be configured to generate a severity score that may be indicative of the relative expected treatment cost of a patient's cancer. The severity score may have no associated units (such that the severity score is simply a number that can be compared against other generated severity scores). In other embodiments, the severity score may have an associated unit, such as a cost that may be reflective of a predicted treatment cost associated with treating the patient's cancer. For example, a severity score may be reflective of a predicted medically necessary treatment cost for a patient's cancer, considering the specific circumstances of the particular patient's condition”. One would have had a reasonable expectation of success given that the use of the machine learning technique, in a statistical sense, is applied to a similar problem type. Therefore, it would have been obvious at the time of invention to a person skilled in the art to modify the teachings of each and to be successful. Response to Arguments Applicant's arguments filed 1/12/2026 have been fully considered but they are not persuasive.
Applicant asserts on page 18 of the Remarks filed 1/12/2026, that the references fail to teach or suggest the pre-processing process as provided in the claim limitations, focusing on how the previous OA relies on Malley et al. and Weiskopf et al. to identify and eliminate inconsistent observation data records via the assessment of concordance but fails to teach or suggest the two-stage preprocessing method outlined in the claims. However, as stated in the previous OA and reiterated in the current OA, Malley et al. provides the limitation and justification for the identification and elimination of inconsistent data from the records as shown by the following: page 117 in paragraph 4 “Knowledge engineering tools may also be used to detect the violation of known data constraints. For example, known functional dependencies among attributes can be used to find values contradicting the functional constraints”, on page 118, paragraph 1 “The same information is often entered in different formats by these different sources”, and on page 118, paragraph 3 “In order to produce an accurate dataset for analysis, the goal is for each patient to have the same event represented in the same manner for analysis. As such, dealing with inconsistency perfectly would usually have to happen at the data entry or data extraction level. However, as data extraction is imperfect, pre-processing becomes important. Often, correcting for these inconsistencies involves some understanding of how the data of interest would have been captured in the clinical setting and where the data would be stored in the EHR database”. Weiskopf et al.
was cited as an additional measurement and justification for the use of such methodologies as shown by the following: page 145, column 2, paragraph 3 “Concordance: Is there agreement between elements in the EHR, or between the EHR and another data source?”, on page 147, column 1, paragraph 6 “Measurement of concordance is generally based on elements contained within the EHR”, and on page 147, column 2, paragraph 1 “The most common approach to assessing concordance was to look at agreement between elements within the EHR, especially diagnoses and associated information such as medications or procedures. The second most common method used to assess concordance was to look at the agreement of EHR data with data from other sources. These other sources included billing information, paper records, patient-reported data, and physician-reported data. Another approach was to compare distributions of data within the EHR with distributions of the same information from similar medical practices or with national rates”. Finally, applicant asserts on page 18 of the Remarks filed 1/12/2026, that the references fail to teach or suggest the amended limitation of training of the machine learning model. However, this point is rendered moot in view of newly cited prior art Kogan et al., which teaches this within the abstract as previously recited. Double Patenting Response to Amendment In view of applicant’s amendments, previous claim rejections under double patenting are not withdrawn. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c).
A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 1 and 3-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6, 9, 11-15, and 17-20 of copending Application No. 17/344,466 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 1 recites a computer-implemented method for automatically modelling colon or rectal cancer severity attributes using multiple independently generated patient data records (pg. 50 lines 1-3), while ‘466 claim 1 recites a computer-implemented method for automatically modelling, but for cancer in general, not specifically colon or rectal cancer (pg. 59 lines 1-3). Claim 1 recites receiving multiple independently generated patient records comprising a biomarker mutation indicator for RAS, KRAS, or NRAS (pg. 50 lines 4-6), while ‘466 claim 1 recites receiving multiple independently generated patient records with a condition specific common cancer type identifier (pg. 59 lines 4-9). Claim 1 recites filtering records using biomarker mutation indicators to generate a model input data set (pg.
50 lines 7-9), while ‘466 claim 1 recites filtering records using common cancer type identifiers to generate output data records (pg. 59 lines 13-17). Claim 1 recites an intra-date and an inter-date filter to identify calendar dates and eliminate records not satisfying the filters, as well as filters for identification of records failing to satisfy biomarker mutation indicators for RAS, KRAS, and NRAS (pg. 50 lines 10-19), while ‘466 claim 1 recites filtering data to eliminate records not satisfying one or more preliminary filter criteria (pg. 59 lines 10-11). Claim 1 recites using observation data record biomarker mutation indicators to generate a RAS biomarker mutation indicator for a given observation data record (pg. 50 lines 20-26), while ‘466 claim 1 recites using common cancer type identifiers to generate observation data records with shared indicators (pg. 59 lines 16-17). Claim 1 recites providing the model input data set to a machine-learning severity model to generate patient-related severity data using derived RAS biomarker mutation indicators (pg. 50 lines 27-29 and pg. 51 lines 1-2), while ‘466 claim 1 recites providing output data records to a machine-learning based model to generate patient-related severity data using common cancer type identifiers (pg. 59 lines 18-23). Both claims recite receiving records, filtering data for specific cancer indicators to provide subsets to a machine-learning severity model to generate more data. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct. While the filters and machine-learning severity models vary in the type of information identified, without additional detail of how the filters and models function, they are generic processes; therefore the claims are patentably indistinct from each other.
Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 3 recites the computer-implemented method where the intra-date filter is configured to identify observation data records with the same date-of-service and one of three combinations of RAS, KRAS, and/or NRAS biomarker mutation indicators (pg. 51 lines 4-24), while ‘466 claim 3 recites the computer-implemented method wherein pre-processing processes select for one of three types of cancer type identifiers (pg. 60 lines 14-29). All claims recite filtering for specific indicators. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct. While the filters vary in the type of information identified, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 4 recites the computer-implemented method where the inter-date filter is configured to identify observation data records with the same date-of-service and one of three combinations of RAS, KRAS, and/or NRAS biomarker mutation indicators (pg. 52 lines 1-21), while ‘466 claim 3 recites the computer-implemented method wherein pre-processing processes select for one of three types of cancer type identifiers (pg. 60 lines 14-29). All claims recite filtering for specific indicators. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct. While the filters vary in the type of information identified, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other.
Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 5 recites the computer-implemented method generating a model data input set using a colon or rectal cancer pre-processing methodology selected in part based on the plurality of observation data records used to generate the model data input set (pg. 52 lines 22-27), while ‘466 claim 2 recites the computer-implemented method outputting a model input data set using a pre-processing process applicable to cancer type identifiers (pg. 59 lines 24-30). While Claim 5 recites selecting processes for colon or rectal cancer and claim 2 recites selecting processes for cancer in general, the act of using information from data records to evaluate and select an appropriate process is mere data gathering; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 6 recites the computer-implemented method where two unique filters are configured to eliminate records both based on calendar date and RAS, KRAS, or NRAS (pg. 52 line 28 and pg. 53 lines 1-6), while ‘466 claim 6 recites the computer-implemented method where two unique filters are configured to exclude records both based on calendar dates and cancer type identifiers (pg. 61 lines 12-19). While the data themselves differ between claims, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because both Claim 7 (pg. 53 lines 6-15) and ‘466 claim 4 (pg. 61 lines 1-9) recite the computer-implemented method where the preliminary filter is based on one or more of three criteria: date range, data source, and data content.
Observation data records meeting the criteria are then provided to the process used to generate the model input data set. While the claim bodies themselves are the same, the independent claims on which they depend ((Claim 1 pg. 50 lines 1-29 through pg. 51 lines 1-2) and (claim 1 pg. 59 lines 1-22) respectively) recite observation records comprising different data types; therefore the claims are not identical. While the preambles differ between claims, the bodies are identical; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because both Claim 8 (pg. 53 lines 16-17) and ‘466 claim 5 (pg. 61 lines 10-11) recite the computer-implemented method with a linear regression machine-learning severity model. While the claim bodies themselves are the same, the independent claims on which they depend ((Claim 1 pg. 50 lines 1-29 through pg. 51 lines 1-2) and (claim 1 pg. 59 lines 1-22) respectively) recite observation records comprising different data types; therefore the claims are not identical. While the preambles differ between claims, the bodies are identical; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 9 recites one or more memory storage areas and one or more processors for automatically modelling colon or rectal cancer severity attributes using multiple independently generated patient data records (pg. 53 lines 18-21), while ‘466 claim 9 recites one or more memory storage areas and one or more processors for automatically modeling severity but for cancer in general, not specifically colon or rectal cancer (pg. 62 lines 1-4). Claim 9 recites receiving multiple independently generated patient records comprising a biomarker mutation indicator for RAS, KRAS, or NRAS (pg.
53 lines 22-24), while ‘466 claim 9 recites receiving multiple independently generated patient records with a condition specific common cancer type identifier (pg. 62 lines 5-6). Claim 9 recites filtering records using biomarker mutation indicators to generate a model input data set (pg. 53 lines 25-26), while ‘466 claim 9 recites filtering records using common cancer type identifiers to generate output data records (pg. 62 lines 7-8). Claim 9 recites an intra-date and an inter-date filter to identify calendar dates and eliminate records not satisfying the filters, as well as filters for identification of records failing to satisfy biomarker mutation indicators for RAS, KRAS, and NRAS (pg. 53 line 27 and pg. 54 lines 1-10), while ‘466 claim 9 recites filtering data to eliminate records not satisfying one or more preliminary filter criteria (pg. 62 lines 11-13). Claim 9 recites using observation data record biomarker mutation indicators to generate a RAS biomarker mutation indicator for a given observation data record (pg. 54 lines 11-17), while ‘466 claim 9 recites using common cancer type identifiers to generate observation data records with shared indicators (pg. 62 lines 14-19). Claim 9 recites providing the model input data set to a machine-learning severity model to generate patient-related severity data using derived RAS biomarker mutation indicators (pg. 54 lines 18-22), while ‘466 claim 9 recites providing output data records to a machine-learning based model to generate patient-related severity data using common cancer type identifiers (pg. 62 lines 21-25). Both claims recite receiving records, filtering data for specific cancer indicators to provide subsets to a machine-learning severity model to generate more data. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct.
While the filters and machine-learning severity models vary in the type of information identified, without additional detail of how the filters and models function, they are generic processes; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 10 recites the system where the intra-date filter is configured to identify observation data records with the same date-of-service and one of three combinations of RAS, KRAS, and/or NRAS biomarker mutation indicators (pg. 54 lines 23-29 and pg. 55 lines 1-14), while ‘466 claim 11 recites the system wherein pre-processing processes select for one of three types of cancer type identifiers (pg. 63 lines 15-31). All claims recite filtering for specific indicators. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct. While the filters vary in the type of information identified, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 11 recites the system where the inter-date filter is configured to identify observation data records with the same date-of-service and one of three combinations of RAS, KRAS, and/or NRAS biomarker mutation indicators (pg. 55 lines 15-29 and pg. 56 lines 1-6), while ‘466 claim 11 recites the system wherein pre-processing processes select for one of three types of cancer type identifiers (pg. 63 lines 15-31). All claims recite filtering for specific indicators. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct.
While the filters vary in the type of information identified, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 12 recites the system where two unique filters are configured to eliminate records both based on calendar date and RAS, KRAS, or NRAS (pg. 56 lines 7-12), while ‘466 claim 14 recites the system where two unique filters are configured to exclude records both based on calendar dates and cancer type identifiers (pg. 64 lines 12-19). While the data themselves differ between claims, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because both Claim 13 (pg. 56 lines 13-22) and ‘466 claim 12 (pg. 64 lines 1-9) recite the system where the preliminary filter is based on one or more of three criteria: date range, data source, and data content. Observation data records meeting the criteria are then provided to the process used to generate the model input data set. While the claims themselves are the same, the independent claims on which they depend ((Claim 9 pg. 53 lines 18-27 through pg. 54 lines 1-22) and (claim 9 pg. 62 lines 1-25) respectively) recite observation records comprising different data types; therefore the claims are not identical. While the preambles differ between claims, the bodies are identical; therefore the claims are patentably indistinct from each other. Although the claims at issue are not identical, they are not patentably distinct from each other because both Claim 14 (pg. 56 lines 23-24) and ‘466 claim 13 (pg. 64 lines 10-11) recite the system with a linear regression machine-learning severity model.
While the claims themselves are the same, the independent claims on which they depend (Claim 9, pg. 53 lines 18-27 through pg. 54 lines 1-22, and ‘466 claim 9, pg. 62 lines 1-25, respectively) recite observation records comprising different data types; therefore the claims are not identical. While the preambles differ between claims, the bodies are identical; therefore the claims are patentably indistinct from each other.

Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 15 recites “a computer program product for automatically modeling severity attributes of a colon cancer or rectal cancer … comprising at least one non-transitory computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions configured to one or more memory storage areas and one or more processors” (pg. 56 lines 25-28 and pg. 57 lines 1-2), while ‘466 claim 15 recites one or more memory storage areas and one or more processors for automatically modeling severity, but for cancer in general, not specifically colon or rectal cancer (pg. 64 lines 20-24). Claim 15 recites receiving multiple independently generated patient records comprising a biomarker mutation indicator for RAS, KRAS, or NRAS (pg. 57 lines 3-5), while ‘466 claim 15 recites receiving multiple independently generated patient records with a condition-specific common cancer type identifier (pg. 64 lines 25-28 through pg. 65 lines 1-2). Claim 15 recites filtering records using biomarker mutation indicators to generate a model input data set (pg. 57 lines 6-7), while ‘466 claim 15 recites filtering records using common cancer type identifiers to generate output data records (pg. 65 lines 6-11).
Claim 15 recites an intra-date and an inter-date filter to identify calendar dates and eliminate records not satisfying the filters, as well as filters for identification of records failing to satisfy biomarker mutation indicators for RAS, KRAS, and NRAS (pg. 57 lines 8-14), while ‘466 claim 15 recites filtering data to eliminate records not satisfying one or more preliminary filter criteria (pg. 65 lines 3-5). Claim 15 recites using observation data record biomarker mutation indicators to generate a RAS biomarker mutation indicator for a given observation data record (pg. 57 lines 15-18), while ‘466 claim 15 recites using common cancer type identifiers to generate observation data records with shared indicators (pg. 65 lines 9-11). Claim 15 recites providing the model input data set to a machine-learning severity model to generate patient-related severity data using derived RAS biomarker mutation indicators (pg. 57 lines 19-23), while ‘466 claim 15 recites providing output data records to a machine-learning based model to generate patient-related severity data using common cancer type identifiers (pg. 65 lines 12-17). Both claims recite receiving records and filtering the data for specific cancer indicators, and providing the resulting subsets to a machine-learning severity model that generates additional data. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct. While the filters and machine-learning severity models vary in the type of information identified, without additional detail of how the filters and models function, they are generic processes; therefore the claims are patentably indistinct from each other.
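As a rough illustration of the pipeline the compared limitations describe (receive records, derive a single RAS indicator per date-of-service, feed the result to a severity model), the sketch below uses only made-up field names, records, and coefficients; in particular, the any()-based consolidation rule and the single-coefficient stand-in for the linear regression model are assumptions, not the application's actual method:

```python
from collections import defaultdict

# Hypothetical per-date biomarker results for one patient.
records = [
    {"dos": "2021-04-01", "marker": "KRAS", "mutated": True},
    {"dos": "2021-04-01", "marker": "NRAS", "mutated": False},
    {"dos": "2021-05-10", "marker": "KRAS", "mutated": False},
]

def derive_ras_indicator(recs):
    """Collapse per-date RAS/KRAS/NRAS results into one derived RAS
    indicator per date-of-service (mutated if any family member is)."""
    by_date = defaultdict(list)
    for r in recs:
        if r["marker"] in {"RAS", "KRAS", "NRAS"}:
            by_date[r["dos"]].append(r["mutated"])
    return {dos: any(flags) for dos, flags in by_date.items()}

def severity_score(ras_mutated, intercept=1.0, weight=2.5):
    """Stand-in for the linear-regression severity model: one learned
    coefficient applied to the derived indicator. Weights are made up."""
    return intercept + weight * float(ras_mutated)

indicators = derive_ras_indicator(records)
scores = {dos: severity_score(m) for dos, m in indicators.items()}
```

The point of the sketch is structural: filtering and derivation select and reshape the data, while the model itself remains a generic regression over whatever indicator the pipeline produces.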
Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 16 recites the computer program product wherein the intra-date filter is configured to identify observation data records with the same date-of-service and one of three combinations of RAS, KRAS, and/or NRAS biomarker mutation indicators (pg. 58 lines 1-21), while ‘466 claim 17 recites the computer program product wherein pre-processing processes select for one of three types of cancer type identifiers (pg. 66 lines 12-24). All claims recite filtering for specific indicators. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct. While the filters vary in the type of information they identify, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other.

Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 17 recites the computer program product wherein the inter-date filter is configured to identify observation data records with the same date-of-service and one of three combinations of RAS, KRAS, and/or NRAS biomarker mutation indicators (pg. 58 lines 22-30 and pg. 59 lines 1-12), while ‘466 claim 17 recites the computer program product wherein pre-processing processes select for one of three types of cancer type identifiers (pg. 66 lines 12-24). All claims recite filtering for specific indicators. Selecting a particular type of data to be manipulated is an insignificant extra-solution activity that does not render the claims distinct. While the filters vary in the type of information they identify, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other.
Although the claims at issue are not identical, they are not patentably distinct from each other because Claim 18 recites the computer program product wherein two unique filters are configured to eliminate records based on both calendar date and RAS, KRAS, or NRAS (pg. 58 lines 13-18), while ‘466 claim 20 recites the computer program product wherein two unique filters are configured to exclude records based on both calendar dates and cancer type identifiers (pg. 67 lines 6-13). While the data themselves differ between claims, without additional detail of how the filters function, they are generic processes; therefore the claims are patentably indistinct from each other.

Although the claims at issue are not identical, they are not patentably distinct from each other because both Claim 19 (pg. 58 lines 19-29) and ‘466 claim 18 (pg. 66 lines 25-30) recite the computer program product wherein the preliminary filter is based on one or more of three criteria: date range, data source, and data content. Observation data records meeting the criteria are then provided to the process used to generate the model input data set. While the claims themselves are the same, the independent claims on which they depend (Claim 15, pg. 56 lines 25-28 through pg. 57 lines 1-29, and ‘466 claim 15, pg. 64 lines 20-28 through pg. 65 lines 1-17, respectively) recite observation records comprising different data types; therefore the claims are not identical. While the preambles differ between claims, the bodies are identical; therefore the claims are patentably indistinct from each other.

Although the claims at issue are not identical, they are not patentably distinct from each other because both Claim 20 (pg. 60 lines 1-2) and ‘466 claim 19 (pg. 67 lines 6-13) recite the computer program product with a linear regression machine-learning severity model. While the claims themselves are the same, the independent claims on which they depend (Claim 15, pg. 56 lines 25-28 through pg. 57 lines 1-29, and ‘466 claim 15, pg. 64 lines 20-28 through pg. 65 lines 1-17, respectively) recite observation records comprising different data types; therefore the claims are not identical. While the preambles differ between claims, the bodies are identical; therefore the claims are patentably indistinct from each other.

Response to Arguments

Applicant requests that the double patenting rejection be held in abeyance until other issues are resolved. As such, Applicant has not presented any substantive arguments against the double patenting rejections.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEENAN NEIL ANDERSON-FEARS whose telephone number is (571)272-0108. The examiner can normally be reached M-Th, alternate F, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Karlheinz Skowronek, can be reached at 571-272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.N.A./
Examiner, Art Unit 1687

/OLIVIA M. WISE/
Supervisory Patent Examiner, Art Unit 1685

Prosecution Timeline

Jun 10, 2021
Application Filed
Feb 18, 2025
Non-Final Rejection — §101, §103, §DP
Apr 23, 2025
Examiner Interview Summary
Apr 23, 2025
Applicant Interview (Telephonic)
May 27, 2025
Response Filed
Sep 05, 2025
Final Rejection — §101, §103, §DP
Jan 12, 2026
Request for Continued Examination
Jan 15, 2026
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592298
Hardware Execution and Acceleration of Artificial Intelligence-Based Base Caller
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on the most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
6%
Grant Probability
56%
With Interview (+50.0%)
5y 1m
Median Time to Grant
High
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
