Prosecution Insights
Last updated: April 19, 2026
Application No. 18/103,765

DATA PROCESSING AND ERROR DETECTION AND CORRECTION FOR ARTIFICIAL INTELLIGENCE SYSTEMS

Non-Final OA (§103, §DP)
Filed
Jan 31, 2023
Examiner
PARK, GRACE A
Art Unit
2144
Tech Center
2100 — Computer Architecture & Software
Assignee
eviCore Healthcare MSI LLC
OA Round
1 (Non-Final)
76%
Grant Probability
Favorable
1-2
OA Rounds
3y 4m
To Grant
94%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
421 granted / 557 resolved
+20.6% vs TC avg
Strong +18% interview lift
+18.2%
Interview Lift
resolved cases with vs. without interview
Typical timeline
3y 4m
Avg Prosecution
23 currently pending
Career history
580
Total Applications
across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 557 resolved cases
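Notably, all four deltas reconcile to a single value, suggesting the black line is one TC-wide estimate of roughly 40% applied across statutes. A quick check (reading each delta as examiner rate minus TC average, in percentage points, which is my assumption):

```python
# Reconcile the per-statute figures against the Tech Center line.
rates = {"101": 11.1, "103": 53.7, "102": 17.0, "112": 10.4}
deltas = {"101": -28.9, "103": 13.7, "102": -23.0, "112": -29.6}

tc_avg = {k: round(rates[k] - deltas[k], 1) for k in rates}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```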

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restriction

Claims 13-27 are withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to a nonelected invention (Group II), there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on December 30, 2025. The applicant disagrees that Group II is directed to "using trained machine learning models to make predictions about patients, medical services, and/or medical treatments." However, independent claim 13 recites loading 3 trained machine learning models with input variables and generating output variables, and claims 14-27 depending therefrom go into further detail about the type of input and output, such as entities, scores, services, treatment regimens, various therapies, predicted future treatment regimens, probabilities of continuing/restarting/discontinuing the current treatment regimen, cost estimates for treatment regimens, etc. Group I is directed to sampling training data, then using the sampled training data to optimize baseline hyperparameters of a baseline machine learning model, then training the model using the optimized hyperparameters; this is different from using 3 already trained models to provide various outputs related to treatment regimens as in Group II. Searches for sampling training data, then optimizing and training a single machine learning model would not encompass loading and using 3 already trained models to provide various outputs about treatment regimens. Thus, there would also be a serious search burden.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 4-9, 11, and 12 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 6-14 of co-pending application 18103722; claims 1 and 4-9 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4 and 7 of co-pending application 18103672; and claims 1 and 4-9 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4 and 7 of co-pending application 18103806. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown below.

Instant App. 18103765, claim 1: A non-transitory computer-readable medium comprising executable instructions for training and optimizing machine learning models, wherein the executable instructions include: loading a training data set, wherein the training data set includes a first bin and a second bin; applying an under-sampling technique to elements of the first bin to generate an updated first bin; applying an over-sampling technique to elements of the second bin to generate an updated second bin; generating an updated training data set by merging the updated first bin and the updated second bin; loading baseline hyperparameters; configuring a machine learning model with the baseline hyperparameters; providing the updated training data set as inputs to the machine learning model configured with the baseline hyperparameters to determine baseline performance metrics; determining whether the baseline performance metrics are above a threshold; in response to determining that the baseline performance metrics are above the threshold, saving the baseline hyperparameters as optimal hyperparameters; configuring the machine learning model with optimal hyperparameters; and providing input variables to the machine learning model configured with the optimal hyperparameters to generate output variables.

Co-pending App. 18103722, claim 1: A computer-implemented method comprising: loading a training data set, wherein the training data set includes a first bin and a second bin … applying an under-sampling technique to the elements of the first bin to generate an updated first bin; applying an over-sampling technique to the elements of the second bin to generate an updated second bin; generating an updated training data set by merging the updated first bin and the updated second bin … loading baseline hyperparameters; configuring the trained machine learning model with the baseline hyperparameters; providing the updated training data set as inputs to the trained machine learning model configured with the baseline hyperparameters to determine baseline performance metrics; determining whether the baseline performance metrics are above a threshold; in response to determining that the baseline performance metrics are above the threshold, saving the baseline hyperparameters as optimal hyperparameters; configuring the trained machine learning model with optimal hyperparameters; and providing input variables to the trained machine learning model configured with the optimal hyperparameters to generate output variables.

Co-pending App. 18103672, claims 1-3: Claim 1. A computer-implemented method comprising: … loading a training data set, the training data set including a first bin and a second bin, applying an under-sampling technique to elements of the first bin to generate an updated first bin, applying an over-sampling technique to elements of the second bin to generate an updated second bin, generating an updated training data set by merging the updated first bin and the updated second bin … Claim 3. … loading baseline hyperparameters; configuring the trained machine learning model with the baseline hyperparameters; running the configured machine learning model to determine baseline metrics; and [implied based on subsequent limitation] in response to the baseline metrics being above a threshold, saving the baseline hyperparameters as the optimal hyperparameters. Claim 2. … configuring the trained machine learning model with the optimal hyperparameters. Claim 1. … providing input variables to the trained machine learning model to generate output variables.

Co-pending App. 18103806, claims 1-3: Claim 1. A system comprising: memory hardware configured to store instructions; and processing hardware configured to execute the instructions, wherein the instructions include: … loading a training data set, the training data set including a first bin and a second bin, applying an under-sampling technique to elements of the first bin to generate an updated first bin, applying an over-sampling technique to elements of the second bin to generate an updated second bin, generating an updated training data set by merging the updated first bin and the updated second bin … Claim 3. … loading baseline hyperparameters; configuring the trained machine learning model with the baseline hyperparameters; running the configured machine learning model to determine baseline metrics; and [implied based on subsequent limitation] in response to the baseline metrics being above a threshold, saving the baseline hyperparameters as the optimal hyperparameters. Claim 2. … configuring the trained machine learning model with the optimal hyperparameters. Claim 1. … providing the input variables to the trained machine learning model to generate output variables …

Dependent-claim correspondence (instant claim → ’722 claim / ’672 claim / ’806 claim):
Claim 4 → Claim 6 / Claim 4 / Claim 4
Claim 5 → Claim 7 / Claim 4 / Claim 4
Claim 6 → Claim 8 / Claim 4 / Claim 4
Claim 7 → Claim 9 / Claim 4 / Claim 4
Claim 8 → Claim 10 / Claim 4 / Claim 4
Claim 9 → Claim 11 / Claim 7 / Claim 7
Claim 11 → Claim 13 / N/A / N/A
Claim 12 → Claim 14 / N/A / N/A

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Walters et al. (US Pub. 20200012900) in view of Jin et al. (WO 2020220544 A1, translation provided).

Referring to claim 1, Walters discloses a non-transitory computer-readable medium comprising executable instructions for training and optimizing machine learning models [fig. 1; pars. 39-43; computing environment 100 comprises computing devices/systems (e.g., computing resources 101 and model optimizer 107) configured to manage training of data models], wherein the executable instructions include: loading a training data set, wherein the training data set includes a first bin and a second bin [par. 188; model input data comprising a plurality of model input datasets is received]; … generating an updated training data set … [par. 189; baseline synthetic data is generated based on the model input data]; loading baseline hyperparameters [par. 190; a baseline model has baseline hyperparameters]; configuring a machine learning model with the baseline hyperparameters [par. 190; note the baseline hyperparameters]; providing the updated training data set as inputs to the machine learning model configured with the baseline hyperparameters to determine baseline performance metrics [pars. 190 and 191; the synthetic data is used to generate the baseline model having the baseline hyperparameters, and a baseline data metric associated with the baseline synthetic data is determined]; determining whether the baseline performance metrics are above a threshold [pars. 190, 197, and 198; generating the baseline model may include tuning (i.e., optimizing) the baseline hyperparameters; the tuning is terminated if a performance metric (e.g., the baseline data metric) meets or exceeds a threshold, which entails determining whether the performance metric exceeds the threshold]; in response to determining that the baseline performance metrics are above the threshold, saving the baseline hyperparameters as optimal hyperparameters [pars. 190, 197, and 198; upon terminating the tuning, the updated (i.e., optimal) hyperparameters are stored in memory; note that the updated hyperparameters would be the baseline hyperparameters if the tuning is terminated without adjusting the hyperparameters because the baseline hyperparameters already exceed the threshold]; configuring the machine learning model with optimal hyperparameters [pars. 190, 197, and 198; generating the baseline model includes the tuning of the baseline hyperparameters]; and providing input variables to the machine learning model configured with the optimal hyperparameters to generate output variables [par. 67; a production environment can be configured to use previously trained data models to process received data, which means that the baseline model (i.e., a previously trained data model) having the baseline hyperparameters (i.e., with tuned baseline hyperparameters) is provided with input (i.e., the received data) to generate output (i.e., processed data)].

Walters does not appear to explicitly disclose applying an under-sampling technique to elements of the first bin to generate an updated first bin; applying an over-sampling technique to elements of the second bin to generate an updated second bin; and generating the updated training data set by merging the updated first bin and the updated second bin. However, Jin discloses applying an under-sampling technique to elements of the first bin to generate an updated first bin; applying an over-sampling technique to elements of the second bin to generate an updated second bin; and generating the updated training data set by merging the updated first bin and the updated second bin [bottom of pg. 8; a server under-samples majority data and over-samples minority data, then combines the first sample data and the second sample data in a preset ratio to obtain balanced data]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generating of the baseline data model taught by Walters so that the baseline data model is trained using sampled data as taught by Jin, with a reasonable expectation of success. The motivation for doing so would have been to obtain balanced data that has diversity, which would improve the effect of training and the accuracy of final results [Jin, bottom of pg. 8].

Referring to claim 2, Walters discloses the non-transitory computer-readable medium of claim 1, wherein the input variables include non-standard identifiers of conditions [pars. 108, 110, and 128; the input can be an actual (unnormalized) dataset, where unnormalized refers to mismatches in a schema (e.g., columns, column types, column categories, or numeric ranges)].

Referring to claim 3, Walters discloses the non-transitory computer-readable medium of claim 2, wherein the output variables include standard identifiers of the conditions [pars. 108, 110, and 128; the baseline data model can be a current data model configured to generate synthetic data similar to (normalized) reference data].
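To make the claimed flow concrete, here is a minimal sketch of the claim 1 pipeline: bin resampling, baseline hyperparameter evaluation against a threshold, and the adjust-and-compare branch recited in claims 4-8 (discussed below). Everything in it is an assumption for illustration; the estimator, hyperparameter names, scorer, and threshold value are mine, not the applicant's or the cited references'.

```python
# Hypothetical sketch of the claimed training/optimization flow; not the
# applicant's implementation. Assumes scikit-learn-style estimators.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic imbalanced training set: first bin (majority), second bin (minority).
X1, y1 = rng.normal(0.0, 1.0, (900, 5)), np.zeros(900, dtype=int)
X2, y2 = rng.normal(1.5, 1.0, (100, 5)), np.ones(100, dtype=int)

# Under-sample the first bin, over-sample the second bin, then merge the
# updated bins into the updated training data set.
n = 400  # illustrative per-bin target size
keep = rng.choice(len(X1), size=n, replace=False)
dup = rng.choice(len(X2), size=n, replace=True)
X = np.vstack([X1[keep], X2[dup]])
y = np.concatenate([y1[keep], y2[dup]])

# Load baseline hyperparameters and determine baseline performance metrics.
baseline = {"n_estimators": 100, "learning_rate": 0.1, "max_depth": 3}
THRESHOLD = 0.80  # made-up performance threshold

def evaluate(params):
    model = GradientBoostingClassifier(**params)
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

baseline_score = evaluate(baseline)

if baseline_score > THRESHOLD:
    optimal = baseline  # metrics above threshold: save baseline as optimal
else:
    # Adjust, re-evaluate, and keep the adjusted values only if they
    # improve on the baseline metrics (cf. claims 4-8).
    adjusted = {**baseline, "n_estimators": 300, "max_depth": 4}
    optimal = adjusted if evaluate(adjusted) > baseline_score else baseline

# Configure the model with the optimal hyperparameters and generate outputs.
final_model = GradientBoostingClassifier(**optimal).fit(X, y)
output_variables = final_model.predict(X[:5])
```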
Referring to claim 4, Walters discloses the non-transitory computer-readable medium of claim 1, wherein the instructions include, in response to determining that the baseline metrics are not above the threshold, adjusting the baseline hyperparameters [pars. 190, 197, and 198; the tuning of the baseline hyperparameters includes iteratively adjusting the baseline hyperparameters until the performance metric meets or exceeds the threshold (i.e., when the baseline hyperparameters are below the threshold)].

Referring to claim 5, Walters discloses the non-transitory computer-readable medium of claim 4, wherein the instructions include configuring the machine learning model with the adjusted hyperparameters [pars. 190, 197, and 198; note that the generating of the baseline model includes the tuning of the baseline hyperparameters].

Referring to claim 6, Walters discloses the non-transitory computer-readable medium of claim 5, wherein the instructions include providing the training data set as inputs to the machine learning model configured with the adjusted hyperparameters to determine updated performance metrics [pars. 190, 197, and 198; note the iterative adjusting of the baseline hyperparameters until the performance metric meets or exceeds the threshold].

Referring to claim 7, Walters discloses the non-transitory computer-readable medium of claim 6, wherein the instructions include determining whether the updated performance metrics are more optimal than the baseline performance metrics [pars. 190, 197, and 198; note that the updated (i.e., optimal) hyperparameters are stored in memory once the tuning is terminated upon determining that the performance metric meets or exceeds the threshold (i.e., the performance metric goes from below the threshold to meeting or exceeding the threshold)].

Referring to claim 8, Walters discloses the non-transitory computer-readable medium of claim 7, wherein the instructions include, in response to determining that the updated performance metrics are more optimal than the baseline performance metrics, saving the adjusted hyperparameters as the baseline hyperparameters [pars. 190, 197, and 198; note that the updated (i.e., optimal) hyperparameters are stored in memory once the tuning is terminated as part of the tuning of the baseline hyperparameters].

Claims 9-12 are rejected under 35 U.S.C. 103 as being unpatentable over Walters and Jin in view of Pal et al. (US Pub. 20250014740).

Referring to claim 9, Walters does not appear to explicitly disclose the non-transitory computer-readable medium of claim 8, wherein the machine learning model is a light gradient-boosting machine (LightGBM) classifier model. However, Pal discloses wherein the machine learning model is a light gradient-boosting machine (LightGBM) classifier model [par. 115; classifiers implemented by a radiology protocol recommendation system may include LightGBM classifiers]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generating of the baseline data model taught by Walters so that the baseline data model is a LightGBM classifier as taught by Pal, with a reasonable expectation of success. The motivation for doing so would have been to provide automatic recommendations in an efficiently scalable manner [Pal, pars. 2 and 4].
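For reference, the LightGBM limitation of claim 9 (and the confidence levels recited in claim 10, addressed next) would look something like the following sketch, assuming the lightgbm Python package; the claims do not name a specific library, and the hyperparameter values are illustrative.

```python
# Hypothetical LightGBM variant of the model above; reuses X and y from the
# previous sketch. Library choice and parameter values are assumptions.
import lightgbm as lgb

clf = lgb.LGBMClassifier(num_leaves=31, learning_rate=0.05, n_estimators=200)
clf.fit(X, y)

predicted_classes = clf.predict(X[:5])                    # e.g., treatment regimens
confidence_levels = clf.predict_proba(X[:5]).max(axis=1)  # confidence per prediction
```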
Referring to claim 10, Pal discloses the non-transitory computer-readable medium of claim 9, wherein the output variables include (i) standard treatment regimens and (ii) confidence levels for the standard treatment regimens [par. 59; a classifier outputs a prediction of a most probable class to which the input data (e.g., an imaging examination order) belongs, the class mapping to a protocol recommendation and a probability that the protocol recommendation satisfies the input data].

Referring to claim 11, Walters discloses the non-transitory computer-readable medium of claim 10, wherein the input variables are stored on one or more storage devices [fig. 4; pars. 62, 67, 69, and 70; the production environment receives the data via a file system for interfacing between one or more production instances and a data source, where the data source stores data in the file system, and the one or more production instances retrieve the stored data from the file system for processing].

Referring to claim 12, Walters discloses the non-transitory computer-readable medium of claim 11, wherein the machine learning model is configured to access the input variables via one or more networks [fig. 4; pars. 62, 67, 69, and 70; note the retrieving of the stored data by the one or more production instances; this can occur via a distributed or cloud environment].

Conclusion

The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure: Lee et al. (US Pub. 20230297830) discloses optimizing baseline hyperparameters.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE PARK, whose telephone number is (571) 270-7727. The examiner can normally be reached M-F 8AM-5PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Grace Park/
Primary Examiner, Art Unit 2144

Prosecution Timeline

Jan 31, 2023
Application Filed
Mar 27, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12591807
SKETCHED AND CLUSTERED FEDERATED LEARNING WITH AUTOMATIC TUNING
2y 5m to grant • Granted Mar 31, 2026
Patent 12585924
CAUSAL MULTI-TOUCH ATTRIBUTION
2y 5m to grant • Granted Mar 24, 2026
Patent 12585728
METHOD AND APPARATUS FOR MACHINE LEARNING BASED INLET DEBRIS MONITORING
2y 5m to grant • Granted Mar 24, 2026
Patent 12579150
Hybrid and Hierarchical Multi-Trial and OneShot Neural Architecture Search on Datacenter Machine Learning Accelerators
2y 5m to grant • Granted Mar 17, 2026
Patent 12579431
METHOD AND SYSTEM FOR MACHINE LEARNING BASED UNDERSTANDING OF DATA ELEMENTS IN MAINFRAME PROGRAM CODE
2y 5m to grant • Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
94%
With Interview (+18.2%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 557 resolved cases by this examiner. Grant probability derived from career allow rate.
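The headline projections follow from simple arithmetic on the examiner's career statistics, assuming (as the page implies) that the interview lift is applied additively in percentage points:

```python
# Sanity check on the dashboard's grant-probability figures.
granted, resolved = 421, 557
allow_rate = granted / resolved               # 0.7558... -> shown as 76%
interview_lift = 0.182                        # +18.2 percentage points
with_interview = allow_rate + interview_lift  # 0.9378... -> shown as 94%
print(f"baseline {allow_rate:.0%}, with interview {with_interview:.0%}")
```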
