Prosecution Insights
Last updated: April 19, 2026
Application No. 17/945,493

INTERACTIVE SYSTEM TO ASSIST A USER IN BUILDING A MACHINE LEARNING MODEL

Non-Final OA: §101, §103, §112
Filed
Sep 15, 2022
Examiner
COLEMAN, PAUL
Art Unit
2126
Tech Center
2100 — Computer Architecture & Software
Assignee
The Dun & Bradstreet Corporation
OA Round
1 (Non-Final)
70%
Grant Probability
Favorable
1-2
OA Rounds
3y 6m
To Grant
99%
With Interview

Examiner Intelligence

Grants 70% — above average
70%
Career Allow Rate
7 granted / 10 resolved
+15.0% vs TC avg
Strong +43% interview lift
+42.9%
Interview Lift
resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution
23 currently pending
Career history
33
Total Applications
across all art units

Statute-Specific Performance

§101
36.3%
-3.7% vs TC avg
§103
42.0%
+2.0% vs TC avg
§102
6.2%
-33.8% vs TC avg
§112
12.4%
-27.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 10 resolved cases

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on May 10, 2023 are determined to be in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements were considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim Rejections - 35 USC § 112(b)

Claims 1, 7, and 13, and claims dependent therefrom, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant regards as the invention.

Claim 1 recites, in relevant part (emphasis added): “generating an optimized parameter space using Bayesian optimization … wherein an optimized parameter set includes training data … and testing data …” The claim also alternates between “optimized parameter space” and “optimized parameter set” within the same sub-step, without clarifying their relationship. Under the broadest reasonable interpretation (BRI) consistent with the specification, a parameter space is the set (often a subset of Euclidean space) of possible parameter values for a model, and a parameter set is a particular selection of hyperparameter values drawn from that space.
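The space/set distinction the examiner draws can be illustrated with a minimal sketch (names and ranges are hypothetical, and plain random sampling stands in for the Bayesian acquisition step, which the claim does not detail):

```python
import random

# Parameter space: ranges of possible hyperparameter values, i.e., a
# subset of finite-dimensional Euclidean space (cf. Spec. ¶[0099]).
parameter_space = {
    "learning_rate": (0.001, 0.3),   # continuous range
    "max_depth": (2, 10),            # integer range
}

def sample_parameter_set(space):
    """A parameter set is one point drawn from the space: a concrete
    tuple of hyperparameter values -- not the training/testing data."""
    return {
        "learning_rate": random.uniform(*space["learning_rate"]),
        "max_depth": random.randint(*space["max_depth"]),
    }

param_set = sample_parameter_set(parameter_space)
# The set has the same keys as the space, with single values per key.
assert set(param_set) == set(parameter_space)
```

Under this reading, training and testing datasets are inputs fed to a model configured by a parameter set, which is why the examiner views the claim's "parameter set includes training data" language as internally inconsistent.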
The specification expressly supports this distinction: “parameter space 350F, which contains the parameter space of possible parameter values that define a particular machine learning model, e.g., a subset of finite-dimensional Euclidean space;” (Spec. ¶[0099]) During “operation 610 … parametric search 215 generates an optimized parameter set based on parameter space 350F and trains the model on it.” (Spec. ¶[0137])

Accordingly, a POSITA would understand that training/testing data are inputs for model training/validation, not members of a “parameter set” (i.e., the hyperparameter tuple). Reciting that “an optimized parameter set includes training data … and testing data” introduces an internal inconsistency in the metes and bounds of the limitation, as it is unclear whether the claim requires (i) a set of hyperparameters, (ii) the datasets, or (iii) some hybrid object. The additional alternation between “optimized parameter space” and “optimized parameter set” within the same step further obscures scope (are both required, or is “space” a misnomer for “set”?). The limitation is therefore vague and indefinite, rendering claims 1, 7, and 13 indefinite.

Claims 2-6, 8-12, and 14-18 depend from claims 1, 7, and 13, respectively, and therefore inherit the ambiguity. These dependent claims are rejected under § 112(b) for the same reasons.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Regarding Claim 1

Claim 1 is rejected under 35 U.S.C. § 101 because the claim is directed to an abstract idea without significantly more.
Claim 1 – Step 2A Prong One

Claim 1 recites the following abstract ideas:

“generating an optimized parameter space using Bayesian optimization approach for said parameter space, wherein an optimized parameter set includes training data from said training dataset, and testing data from said testing dataset;” – this limitation recites a mathematical concept under MPEP § 2106.04(a)(2) because Bayesian optimization is a mathematical optimization technique comprising mathematical relationships and calculations.

“calculating Kolmogorov-Smirnov (KS) statistics for said model results;” – this limitation recites a mathematical concept under MPEP § 2106.04(a)(2) because it expressly requires a mathematical calculation (the KS statistic).

Claim 1 – Step 2A Prong Two – Do the additional elements integrate the exception into a practical application?

No. The additional elements:

“receiving a training dataset, a testing dataset, a number of iterations, and a parameter space of possible parameter values that define a base model;” – this limitation recites gathering data for later use, which constitutes insignificant pre-solution activity under MPEP § 2106.05(g).

“for said number of iterations, performing a parametric search process that produces a report that includes information concerning a plurality of machine learning models” – this limitation recites program flow that constitutes mere instructions to apply the abstract idea, not a technological improvement. MPEP § 2106.05(f). The limitation also recites insignificant extra-solution activity, “produces a report that includes information …”, that does not improve computer functionality, use a particular machine, or effect a transformation. MPEP § 2106.05(g).

“running said base model with said optimized parameter set, thus yielding model results for said plurality of machine learning models;” – this limitation merely invokes execution of a model that applies mathematics and obtains/presents the model’s results. This amounts to mere instructions to apply the abstract idea and insignificant pre-/post-solution activity (data gathering/presentation). See MPEP § 2106.05(g), § 2106.05(f).

“saving said model results and said KS statistics to said report;” – this limitation recites insignificant extra-solution activity under MPEP § 2106.05(g).

“and sending said report to a user device.” – this limitation recites mere instructions to apply the abstract idea under MPEP § 2106.05(f).

Claim 1 – Step 2B – Do the additional elements amount to “significantly more” than the judicial exception?

No. When considered individually and in combination, the additional elements are well-understood, routine, and conventional (WURC) activities:

“receiving a training dataset, a testing dataset, a number of iterations, and a parameter space of possible parameter values that define a base model;” – this recites WURC data intake implemented on generic components, as evidenced by the specification’s description of ordinary data inputs and generic computer/user-device/network elements (Spec. ¶[0136]). As such, this limitation does not add an inventive concept.

“for said number of iterations, performing a parametric search process that produces a report that includes information concerning a plurality of machine learning models” – this recites WURC program control and report generation performed using conventional computing, with no particular machine, transformation, or improvement to computer functionality.

“running said base model with said optimized parameter set, thus yielding model results for said plurality of machine learning models;” – this limitation merely invokes execution of a model and obtains its results, implemented on generic computer components.
It is WURC program execution and data processing that does not add an inventive concept or improve computer functioning. See Spec. ¶[0136] (generic computer/user-device/network), ¶[0043] (report storage/presentation), ¶¶[0037]-[0038] (generic transmission/user device).

“saving said model results and said KS statistics to said report;” – this limitation constitutes generic output/presentation: WURC storage/presentation performed in a conventional manner using generic memory/report structures that does not add an inventive concept. (See Spec. ¶[0043].)

“and sending said report to a user device.” – transmission using generic networking/user-device functionality (see Spec. ¶¶[0037], [0038]) is WURC and adds no unconventional mechanism or inventive concept.

Regarding Claim 2

Claim 2 is rejected under 35 U.S.C. § 101 because the claim is directed to an abstract idea without significantly more. The claim depends from claim 1, which includes an abstract idea (see the rejection of claim 1).

Claim 2 – Step 2A Prong One

The additional limitations:

“a coverage table that contains a percentage of non-missing values for every feature in said initial dataset;” – this limitation recites presenting completeness/coverage information (percentages per feature). This constitutes a mental process (organizing/presenting information). See MPEP § 2106.04(a)(2).

“a feature importance table which contains significance of important features with a summary of variance inflation factor to check the correlation between continuous variables and summary of Cramer's V statistics to check the correlation between categorical variables;” – this limitation recites ranking/evaluating features and summarizing metrics (VIF, Cramer’s V). Organizing/presenting information constitutes a mental process under MPEP § 2106.04(a)(2).
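For context, the two metrics named in this limitation are short, well-defined calculations. A minimal sketch with made-up data (all variable names and the example numbers are hypothetical, not from the application):

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column: regress each feature on the
    others (with intercept); VIF_j = 1 / (1 - R^2_j)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])  # add intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

def cramers_v(table):
    """Cramer's V for a contingency table of two categorical variables:
    sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    return float(np.sqrt(chi2 / (n * (min(table.shape) - 1))))

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=200)])  # near-duplicates
print(vif(X))  # large values flag collinear continuous features
print(cramers_v([[30, 10], [10, 30]]))  # association between two categoricals
```

This is offered only to show that the recited VIF and Cramer's V summaries are conventional statistical computations of the kind the examiner characterizes as mathematical concepts and mental processes.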
performing a feature-selection process that computes correlations between variables and produces a correlation table that contains correlation values of correlated pairs – this limitation recites organizing and presenting relationships between features (correlation values) in a table. This constitutes a mental process (information evaluation/organization) under MPEP § 2106.04(a)(2).

and an interim dataset that contains an interim list of variables – this limitation recites listing/organizing variables for interim use, a mental process under MPEP § 2106.04(a)(2).

Claim 2 – Step 2A Prong Two – Do the additional elements integrate the exception into a practical application?

No. The additional elements:

“obtaining from a user device: an initial dataset; a target variable that contains a name of a dependent variable present in said initial dataset; and optionally, a weight that contains the name of a sample weight variable present in initial dataset;” – this recites pre-solution data gathering and data intake, which constitute insignificant extra-solution activity under MPEP § 2106.05(g) and mere instructions to apply the exception under MPEP § 2106.05(f). It does not improve computer functionality, is not tied to a particular machine, and effects no transformation. See MPEP § 2106.05(a)-(c).

Claim 2 – Step 2B – Do the additional elements amount to “significantly more” than the judicial exception?

No. When considered individually and in combination, the additional elements are WURC activities:

“obtaining from a user device: an initial dataset; a target variable that contains a name of a dependent variable present in said initial dataset; and optionally, a weight that contains the name of a sample weight variable present in initial dataset;” – this limitation constitutes generic data intake over a generic user-device/computer/network, which is WURC.
The applicant’s own specification describes the input step and the platform at a high level of generality and in generic terms: “Message 310 … includes … an initial dataset 310A … a target variable 310B … and a weight 310C … [optionally] a missing value threshold 310D … and a correlation threshold 310E.” (Spec. ¶[0053]) “System 100 includes a user device 130 and a computer 105 that are communicatively coupled to a network 135.” (Spec. ¶[0036]) “Computer 105 includes a processor 110 and a memory 115 …” (Spec. ¶[0039])

Considering the additional elements individually or as an ordered combination, claim 2 does not add an inventive concept that amounts to significantly more than the abstract mathematics and therefore fails to integrate the exception into a practical application.

Regarding Claim 3

Claim 3 is rejected under 35 U.S.C. § 101 because the claim is directed to an abstract idea without significantly more. The claim depends from claim 2, which includes an abstract idea (see the rejection of claim 2).

Claim 3 – Step 2A Prong One

Additional limitations that recite a judicial exception:

“wherein said interim dataset is a first interim data set,” – this limitation recites organizing data, a mental process, that merely labels/organizes the interim dataset produced by the earlier mathematical processing. See MPEP § 2106.04(a)(2)(III).

Claim 3 – Step 2A Prong Two – Do the additional elements integrate the exception into a practical application?

No. The additional elements:

“and wherein said method further comprises: sending said first interim data set to said user device;” – this limitation recites post-solution transmission of results to a user device, which constitutes insignificant extra-solution activity and mere instructions to apply the exception. See MPEP § 2106.05(g); § 2106.05(f).
“and receiving from said user device, a second interim dataset that is a modified version of said first interim dataset.” – this limitation recites data gathering/input before further mathematics is performed, constituting insignificant extra-solution activity and mere instructions to apply the exception. See MPEP § 2106.05(g); § 2106.05(f).

Claim 3 – Step 2B – Do the additional elements amount to “significantly more” than the judicial exception?

No. When considered individually and in combination, the additional elements are WURC activities:

“and wherein said method further comprises: sending said first interim data set to said user device;” – this limitation recites a WURC client-server output that the specification itself describes conventionally: “(c) prepares a message 320; and (d) transmits message 320 to user device 130.” (Spec. ¶[0064], ¶[0065]) There is no unconventional mechanism or improvement to computer functionality.

“and receiving from said user device, a second interim dataset that is a modified version of said first interim dataset.” – this limitation recites routine receipt of user-modified data via generic messaging, which is WURC activity. The specification itself describes this in conventional terms: “Message 330 … includes: (a) interim dataset 330A, which user 101 prepared by either accepting interim dataset 320D, or adjusting or modifying interim dataset 320D …” (Spec. ¶[0078])

Regarding Claim 4

Claim 4 is rejected under 35 U.S.C. § 101 because the claim is directed to an abstract idea without significantly more. The claim depends from claim 2, which includes an abstract idea (see the rejection of claim 2).
Claim 4 – Step 2A Prong One

The claim recites an abstract idea:

“performing a clustering process that produces: a cluster report that contains feature groupings” – this limitation recites a mental process because it consists of evaluating and grouping features (classification/organization) that can be carried out in the human mind or with pen and paper (i.e., observing similarities/differences and judging group membership). It therefore falls within the mental-process category of MPEP § 2106.04(a)(2).

“produces: a cluster report that contains feature groupings; and an interim list of variables” – these limitations recite mental processes because they amount to organizing and presenting information (listing grouped features and identifying an interim variable list) derived from the evaluative clustering step. See MPEP § 2106.04(a)(2).

Claim 4 – Step 2A Prong Two – Do the additional elements integrate the exception into a practical application?

No. The additional elements:

“obtaining an interim dataset and a desired quantity of clusters;” – this limitation recites mere data gathering, an insignificant extra-solution activity under MPEP § 2106.05(g).

Claim 4 – Step 2B – Do the additional elements amount to “significantly more” than the judicial exception?

No. When considered individually and in combination, the additional elements are WURC activities:

“obtaining an interim dataset and a desired quantity of clusters;” – this step is insignificant pre-solution activity (MPEP § 2106.05(g)) and mere data gathering; the courts have consistently found limitations directed to obtaining and storing information electronically, recited at a high level of generality, to be WURC.
See MPEP § 2106.05(d)(II) (“receiving or transmitting data over a network,” “electronic record keeping,” and “storing and retrieving information in memory”). The applicant’s specification describes generic computer components performing routine, ordinary functions. Such components are WURC under Berkheimer v. HP, 881 F.3d 1360 (Fed. Cir. 2018). There is no unconventional arrangement or improvement in computer functionality, and the claim does not recite significantly more than the abstract idea.

Regarding Claim 5

Claim 5 is rejected under 35 U.S.C. § 101 because the claim is directed to an abstract idea without significantly more. The claim depends from claim 1, which includes an abstract idea (see the rejection of claim 1).

Claim 5 – Step 2A Prong One

Additional limitations that recite an abstract idea: none beyond those already identified in claim 1.

Claim 5 – Step 2A Prong Two – Do the additional elements integrate the exception into a practical application?

No. The additional elements:

“wherein said number of iterations and said parameter space are specified by a user, via said user device” – this limitation recites pre-solution data gathering (user input) and mere instructions to apply the abstract idea using a generic user device. It does not improve computer functionality, is not tied to a particular machine in a meaningful way, and does not effect a transformation. It is insignificant extra-solution (pre-solution) activity under MPEP § 2106.05(g) and § 2106.05(f).

Claim 5 – Step 2B – Do the additional elements amount to “significantly more” than the judicial exception?

No. When considered individually and in combination, the additional elements are WURC activities.
The additional elements:

“wherein said number of iterations and said parameter space are specified by a user, via said user device” – this limitation reflects routine user initialization and message passing on generic components, as evidenced by the applicant’s specification (see Spec. ¶¶[0169], [0093]), and therefore does not add an inventive concept. See MPEP § 2106.05(d) (WURC) and § 2106.05(f) (mere instructions to apply). Considered individually and as an ordered combination, there is no non-conventional arrangement of components or specific improvement to computer operation. Thus, the claim as a whole does not recite significantly more.

Regarding Claim 6

Claim 6 is rejected under 35 U.S.C. § 101 because the claim is directed to an abstract idea without significantly more. The claim depends from claim 1, which includes an abstract idea (see the rejection of claim 1).

Claim 6 – Step 2A Prong One

Additional limitations that recite an abstract idea: none beyond those already identified in claim 1.

Claim 6 – Step 2A Prong Two – Do the additional elements integrate the exception into a practical application?

No. The additional elements:

“receiving from said user device, a communication that selects one or more of said machine learning models, thus yielding a selected model;” – this is insignificant extra-/post-solution activity: merely receiving a user selection after the mathematical processing is complete. Such transmitting/receiving of information and user selections on generic devices fails to impose a meaningful limit on the abstract idea. See MPEP § 2106.05(g) (insignificant extra-solution activity).

“and storing said selected model in a memory device.” – merely storing data (the selected model) is also insignificant extra-solution activity performed by generic computer memory and does not improve the functioning of the computer or another technology. See MPEP § 2106.05(g) and § 2106.05(a).
Claim 6 – Step 2B – Do the additional elements amount to “significantly more” than the judicial exception?

No. When considered individually and in combination, the additional elements are WURC activities:

“receiving from said user device, a communication that selects one or more of said machine learning models, thus yielding a selected model;” – this limitation recites WURC user-interface/network interaction. The applicant’s specification depicts exactly this generic interaction: “In operation 365, user 101 selects a model from model results 360A, and the selected model is stored as model 127 in database 125 … User 101 can input the iteration number … to obtain the selected model.” (See Spec. ¶[0111])

“and storing said selected model in a memory device.” – this constitutes routine data storage using generic memory/database components that are WURC. The specification confirms routine storage in a conventional database: “Computer 105 is coupled to a database 125 … Database 125 also stores model 127.” (See Spec. ¶[0044])

Regarding Claims 7-12 (method claims)

Each of claims 7-12 is the method analog of the correspondingly numbered claim among claims 1-6 (claim 7 corresponds to claim 1, claim 8 to claim 2, claim 9 to claim 3, claim 10 to claim 4, claim 11 to claim 5, and claim 12 to claim 6). The method recitations merely express, in step form, the same functional limitations previously analyzed for those claims and do not add material limitations that change the character of the exception or integrate it into a practical application. Therefore, claims 7-12 are rejected under 35 U.S.C. § 101.

Regarding Claim 13

Claim 13 is rejected under 35 U.S.C.
§ 101 as being directed to a judicial exception (an abstract idea) without significantly more.

Claim 13 – Step 2A Prong One – Does the claim recite a judicial exception?

Yes. The following claim limitations recite judicial exceptions:

“generating an optimized parameter space using Bayesian optimization approach for said parameter space, wherein an optimized parameter set includes training data from said training dataset, and testing data from said testing dataset;” – this limitation recites mathematical optimization over a parameter space (Bayesian optimization), which is a mathematical concept under MPEP § 2106.04(a)(2).

“calculating Kolmogorov-Smirnov (KS) statistics for said model results;” – this limitation expressly recites calculating a statistical metric (KS), which is a mathematical calculation under MPEP § 2106.04(a)(2).

Claim 13 – Step 2A Prong Two – Do the additional elements integrate the exception into a practical application?

No. The additional elements:

“running said base model with said optimized parameter set, thus yielding model results for said plurality of machine learning models;” – this limitation merely invokes execution of a model that applies mathematics and obtains/presents the model’s results. This amounts to mere instructions to apply the abstract idea and insignificant pre-/post-solution activity (data gathering/presentation). See MPEP § 2106.05(g), § 2106.05(f).

“and saving said model results and said KS statistics to said report;” – this limitation recites saving data, which represents insignificant extra-/post-solution output/presentation (writing results to a report) under MPEP § 2106.05(g) and mere instructions to apply the abstract calculations under § 2106.05(f).

“and sending said report to a user device.” – this limitation recites post-solution transmission of results to a user device. This is insignificant extra-solution activity under MPEP § 2106.05(g) and mere instructions to apply the exception under MPEP § 2106.05(f).
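The KS statistic recited in claims 1 and 13 is the maximum vertical gap between two empirical cumulative distributions, here of model scores for the two outcome classes. A minimal sketch (the score values are invented for illustration):

```python
def ks_statistic(scores_pos, scores_neg):
    """Two-sample KS: max gap between the empirical CDFs of the model
    scores for positive and negative observations."""
    pos = sorted(scores_pos)
    neg = sorted(scores_neg)
    ks = 0.0
    for t in sorted(set(pos + neg)):
        cdf_pos = sum(s <= t for s in pos) / len(pos)
        cdf_neg = sum(s <= t for s in neg) / len(neg)
        ks = max(ks, abs(cdf_pos - cdf_neg))
    return ks

# A model whose scores fully separate the classes has KS = 1.0.
print(ks_statistic([0.7, 0.8, 0.9, 0.95], [0.1, 0.2, 0.3, 0.6]))  # 1.0
```

In practice the same quantity is available as `scipy.stats.ks_2samp`; the point here is only that the calculation is a pure mathematical operation, which underlies the examiner's Step 2A Prong One characterization.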
Claim 13 – Step 2B – Do the additional elements amount to “significantly more” than the judicial exception?

No. When considered individually and in combination, the additional elements are WURC activities:

“running said base model with said optimized parameter set, thus yielding model results for said plurality of machine learning models;” – this limitation merely invokes execution of a model and obtains its results, implemented on generic computer components. It is WURC program execution and data processing that does not add an inventive concept or improve computer functioning. See Spec. ¶[0136] (generic computer/user-device/network), ¶[0043] (report storage/presentation), ¶¶[0037]-[0038] (generic transmission/user device).

“and saving said model results and said KS statistics to said report;” – this limitation recites routine storage/presentation of computed metrics in a table/report that the specification itself describes conventionally: “The models are built, and key metrics and information such as KS, Gini, 10% and 20% capture rates, parameters and iteration number for each model are captured and stored in a table as a part of model results 360A.” (Spec. ¶[0138])

“and sending said report to a user device.” – this limitation recites routine client-server transmission of a results table to a user device, which is WURC and which the specification itself describes conventionally: “(c) prepares message 360; and (d) transmits message 360 to user device 130.” (Spec. ¶¶[0106]-[0107]) “In operation 620, in message 360, parametric search 215 returns, to user device 130, a table with the information from operation 615.” (Spec. ¶[0139])

Regarding Claims 14-18

The storage-device claims are analogous to claims 8-12 and 2-6, respectively (claim 14 is analogous to claim 8 and claim 2, claim 15 is analogous to claim 9 and claim 3, and so on). Therefore, claims 14-18 are rejected under 35 U.S.C. § 101 for the same reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Patrick Hayes (US10217061B2), henceforth “Hayes,” in view of Chengwen Robert Chu (US8214308B2), henceforth “Chu.”

Regarding claim 1, Hayes in view of Chu teaches a method comprising:

receiving a training dataset, a testing dataset, a number of iterations, and a parameter space of possible parameter values that define a base model – Hayes teaches this limitation. Hayes teaches receiving optimization requests that include a parameter space and a tuning budget (iterations): “receive API requests for implementing new optimization work requests” (Hayes, col. 4, lines 49-50) “constraints may include maximum and minimum values for each of the hyperparameters” (Hayes, col. 4, lines 11-12) “constraints may additional include an optimization (tuning) budget that limits a number of optimization trials” (Hayes, col.
4, lines 15-16)

for said number of iterations, performing a parametric search process that produces a report that includes information concerning a plurality of machine learning models, wherein said parametric search process includes – Hayes teaches this limitation. Hayes teaches iterative parametric (hyperparameter) search over a space, generating candidate settings and rankings: “ensemble of Bayesian optimization processes and machine learning techniques that function to automate an optimization or tuning of features (including weights or coefficients of features) of a model, architecture of a model, and hyperparameters” (Hayes, col. 3, lines 52-56) “ranking system 155 that functions to rank the suggestions for a given optimization work request (or across multiple optimization trials for a given optimization work request) such that the suggestions having parameter values most likely to perform the best can be passed or pulled via the API 105.” (Hayes, col. 9, lines 31-36)

generating an optimized parameter space using Bayesian optimization approach for said parameter space – Hayes teaches this limitation. Hayes teaches Bayesian optimization over a parameter space: “ensemble of Bayesian optimization processes” (Hayes, col. 3, lines 52-53)

Hayes does not teach:

wherein an optimized parameter set includes training data from said training dataset, and testing data from said testing dataset
running said base model with said optimized parameter set, thus yielding model results for said plurality of machine learning models
calculating Kolmogorov-Smirnov (KS) statistics for said model results
and saving said model results and said KS statistics to said report

Chu, however, teaches these limitations:

wherein an optimized parameter set includes training data from said training dataset, and testing data from said testing dataset – Chu teaches this limitation.
Chu teaches explicit, paired training and test data usage: “training data used to develop the model … validation data” (Chu, col. 14, lines 21-22) “compared on blind test data” (Chu, col. 14, line 4)

running said base model with said optimized parameter set, thus yielding model results for said plurality of machine learning models – Chu teaches this limitation. Chu teaches running/scoring models and computing statistics: “model manager will score the models and compute statistics” (Chu, col. 14, lines 25-26)

calculating Kolmogorov-Smirnov (KS) statistics for said model results – Chu teaches this limitation. Chu teaches KS and ksDecay explicitly: “A Kolmogorov-Smirnov (KS) chart … The maximum vertical difference … is called a KS statistic.” (Chu, col. 7, lines 27-30)

and saving said model results and said KS statistics to said report – Chu teaches this limitation. Chu teaches generating and distributing reports of model performance: “running model performance reports, distributing the generated reports” (Chu, col. 16, lines 30-31) Persisting the scored results (including KS) into Chu’s existing reporting objects for auditability and communication would be routine and obvious.

and sending said report to a user device – Chu teaches this limitation. Chu teaches a web-based system and distribution of the generated reports: “an integrated web-based reporting and analysis tool” (Chu, col. 2, lines 65-66) “distributing the generated reports” (Chu, col. 16, line 31)

A POSITA would have been motivated at the time of the claimed invention to combine Hayes (Bayesian hyperparameter search over a user-defined space with budgeted iterations) with Chu (in-platform training/scoring on train/validation/test splits, KS computation, and report generation/distribution) to form a complete tuning-to-evaluation pipeline.
Chu supplies the objective evaluation (including KS) and reporting that Hayes's search requires to compare candidate configurations; adding KS and packaging trials with the corresponding train/test data are routine, predictable design choices. Both references are networked/API-oriented and contemplate automated, parallel execution and user delivery, so integration would have been a reasonable expectation of success without teaching away (KSR).

Regarding claim 2, Hayes in view of Chu teaches the method of claim 1, further comprising, prior to performing said parametric search process:

obtaining from a user device: an initial dataset; a target variable that contains a name of a dependent variable present in said initial dataset; and optionally, a weight that contains the name of a sample weight variable present in initial dataset – Hayes does not teach this limitation. Chu teaches this limitation. Chu teaches dataset and target (dependent) variable concepts:

"The scoring client forms the MQ data packet and sends it to the MQ server. The scoring server retrieves the MQ data packet, executes the corresponding score code, and posts the output scores to the MQ server. The client then retrieves the indexed output from the MQ server and continues processing." (Chu, col. 16, lines 2-7)

"create one master model input data set 1608 that can serve multiple models 1610." (Chu, col. 15, lines 57-58)

"The training data used to develop the model" (Chu, col. 14, line 21)

"The validation data used to control model over-fitting" (Chu, col. 14, line 22)

"Definition of the dependent variable (in this case typically, loan default) can have many forms …" (Chu, col. 13, lines 31-32)

"the definition of the dependent variable should be consistent over time." (Chu, col. 13, lines 40-41)

and performing a feature selection process that produces: a correlation table that contains correlation values of correlated pairs – Hayes does not teach this limitation. Chu teaches this limitation.
Chu teaches correlation analysis as part of feature selection:

"Aggressive variable selection, use of exploratory correlation statistics, and variable clustering can be effective in reducing long-term instability." (Chu, col. 13, lines 57-59)

"This generates the following table of responder counts:" (Chu, col. 17, lines 15-16)

[Table of responder counts reproduced as an image in the original.]

"Some columns in a performance report can be intentionally listed side-by-side for easy cross reference." (Chu, col. 10, lines 1-2)

a coverage table that contains a percentage of non-missing values for every feature in said initial dataset – Hayes does not teach this limitation. Chu teaches this limitation. Chu teaches missing-value handling and report generation:

"Filtering outliers and replacing missing values" (Chu, col. 12, line 57)

"Provide monitoring trend charts or reports via process 840." (Chu, col. 6, line 28)

a feature importance table which contains significance of important features with a summary of variance inflation factor to check the correlation between continuous variables and summary of Cramer's V statistics to check the correlation between categorical variables – Hayes does not teach this limitation. Chu teaches this limitation. Chu teaches the underlying analyses that drive such summaries:

"Aggressive variable selection, use of exploratory correlation statistics, and variable clustering" (Chu, col. 14, 13, lines 46, 57-59)

"Validate data distributions" and monitor categorical/continuous drift using information-theoretic indices: "KullBack-Keibler information index (KK index) is used to measure the deviation between two distributions." (Chu, col. 7, lines 35-36)

Selecting particular, well-known correlation/association measures (e.g., VIF for continuous-variable multicollinearity; Cramer's V for categorical association) is a routine choice within Chu's expressly stated "exploratory correlation statistics" and distribution-comparison analyses; presenting them in a "feature importance table" is a predictable report format, as Chu already uses tables/reports.

and an interim dataset that contains an interim list of variables – Hayes does not teach this limitation. Chu teaches this limitation. Chu teaches intermediate/partial data artifacts and update flows consistent with an interim dataset:

"Optionally, a reference model input table 1710 can be used to provide data fields that are not provided by the client application. In that case, the server updates the full data record with the new partial data record 1730" (Chu, col. 16, lines 11-15; Fig. 18)

"A model extraction macro can be run before any model performance monitoring reporting macro is run" (Chu, col. 9, lines 34-35)

Chu's "reference model input table" and "new partial data record 1730" are interim constructs that reflect a subset/list of variables prior to full scoring, i.e., an "interim dataset … [and] interim list of variables."

Implementing Chu's pre-modeling data/feature preparation and tabular reporting within Hayes's API-driven optimization pipeline is a predictable, beneficial integration. Hayes seeks to "improve … predictive … performance … while greatly improving an overall … performance of a model" via an API-orchestrated platform, and to "reduce the complexity in creating an optimization work request". (Hayes, col. 2, 3, lines 60-61, 64-65).
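As an illustration of the claimed feature-selection outputs (a coverage table of non-missing percentages, a correlation table of correlated pairs, and a Cramer's V summary for categorical variables), a minimal sketch follows. The column names, threshold, and toy DataFrame are assumptions made for the example, and a VIF summary would follow the same tabular pattern.

```python
import numpy as np
import pandas as pd

def coverage_table(df):
    """Percent of non-missing values for every feature (the 'coverage table')."""
    return (df.notna().mean() * 100).round(1)

def correlation_pairs(df, threshold=0.8):
    """Pairs of features whose absolute Pearson correlation exceeds a cutoff."""
    corr = df.corr().abs()
    cols = list(corr.columns)
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] >= threshold:
                pairs.append((cols[i], cols[j], round(float(corr.iloc[i, j]), 3)))
    return pairs

def cramers_v(x, y):
    """Cramer's V association between two categorical series."""
    table = pd.crosstab(x, y).to_numpy().astype(float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Toy initial dataset: income2 nearly duplicates income, and income has a gap.
df = pd.DataFrame({
    "income":  [50.0, 60.0, np.nan, 80.0],
    "income2": [51.0, 61.0, 72.0, 81.0],
    "age":     [60.0, 20.0, 40.0, 30.0],
})
cov = coverage_table(df)
pairs = correlation_pairs(df)
v = cramers_v(pd.Series(["a", "a", "b", "b"]), pd.Series(["x", "x", "y", "y"]))
```

These few lines produce exactly the kinds of tables the claim recites, which is consistent with the examiner's position that the report formats are routine once the underlying statistics are chosen.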
Chu teaches the upstream steps practitioners perform before and around model training/selection – "Access and manage the data", "Develop the model" (including "exploratory correlation statistics", handling "missing values", and reports/tables), and explicit user-system interactions ("users 32 can interact with the system 34 … over … networks") (Chu, col. 2, lines 39-40, 61). Integrating Chu's routine feature-selection/QA outputs (correlation, coverage, importance, interim subsets) before Hayes's tuning is an obvious combination to feed cleaner, vetted features to the optimizer and thereby further Hayes's stated goal of improved model performance with efficient evaluations.

Regarding claim 3, Hayes in view of Chu teaches the method of claim 2, wherein said interim dataset is a first interim data set, and wherein said method further comprises:

sending said first interim data set to said user device – Hayes does not teach this limitation. Chu teaches this limitation. Chu teaches a web-based system and distribution of the generated reports:

"an integrated web-based reporting and analysis tool" (Chu, col. 2, lines 65-66)

"distributing the generated reports" (Chu, col. 16, line 31)

and receiving from said user device, a second interim dataset that is a modified version of said first interim dataset – Hayes teaches the platform and response, but does not teach receipt of a specifically "modified" interim dataset. Chu teaches this limitation. Chu teaches client-to-server submission of updated/partial data that the server incorporates:

"the server updates the full data record with the new partial data record 1730" (Chu, col. 16, lines 14-15; Fig. 18)

Chu also establishes the interactive user/system context:

"Users 32 can interact with the system 34 through a number of ways, such as over one or more networks 36." (Chu, col. 2, lines 60-61; Fig. 1)

Hayes's API pipeline "returning an API response" and "receive API requests" enables bidirectional exchange, while Chu provides concrete user-system data handoffs: "distributing the generated reports", "returns the score … to the client application", and client-supplied updates where "the server updates the full data record with the new partial data record 1730". Combining them yields the claimed review loop (send a first interim data set to the user device and receive a second, user-modified version), predictably integrating human-in-the-loop feature curation into Hayes's optimization flow.

Regarding claim 4, Hayes in view of Chu teaches the method of claim 1, further comprising, prior to performing said parametric search process:

obtaining … a desired quantity of clusters – Hayes teaches this limitation. Hayes teaches clustering as a process the platform can run:

"a clustering method (e.g., k-means clustering, expectation maximization, etc.)" (Hayes, col. 8, lines 33-34)

and performing a clustering process that produces: a cluster report that contains feature groupings; and an interim list of variables – Hayes does not teach these limitations. Chu teaches these limitations. Chu teaches generating reports and organized outputs:

"a model manager that includes a project-based repository … and also several model comparison and monitoring reports." (Chu, col. 11, lines 52-55)

Chu also teaches grouping features/entities for analysis:

"Segmenting customers into groups with common attributes" (Chu, col. 12, line 58)

And Chu enumerates explicit model variable lists:

"A list of essential input variables required by the model" and "A list of output variables created by the model" (Chu, col.
14, lines 15, 16)

Hayes's API-driven platform supports "a clustering method (e.g., k-means clustering…)" and accepts user-specified settings ("constraints … for each of the hyperparameters"), while Chu provides interim/partial data artifacts ("reference model input table 1710", "new partial data record 1730"), grouped analysis outputs ("Segmenting customers into groups with common attributes"), and reporting ("… several model comparison and monitoring reports") plus variable lists ("A list of essential input variables …"). Combining them yields, prior to the parametric search, obtaining an interim dataset and a desired clustering setting, performing clustering, and producing a cluster report with feature groupings and an interim list of variables, with predictable results.

Regarding claim 5, Hayes in view of Chu teaches the method of claim 1, wherein said number of iterations and said parameter space are specified by a user, via said user device – Hayes teaches this limitation. Hayes teaches user-specified hyperparameters/constraints (i.e., the parameter space):

"The hyperparameter optimization work request may include an identification of the hyperparameters a user desires to optimize together with constraints or parameters required for experimenting or performing optimization trials …" (Hayes, col. 3, lines 14-18)

"The one or more constraints may include maximum and minimum values for each of the hyperparameters" (Hayes, col. 4, lines 11-12)

Hayes teaches a user-specified number of iterations (optimization trials/budget):

"The one or more constraints may additional include an optimization (tuning) budget that limits a number of optimization trials" (Hayes, col. 4, lines 14-16)

Hayes teaches this is done "via said user device":

"The intelligent API may be implemented as a client application on a client device, as a web browser, or any suitable interface accessible to a remote user system." (Hayes, col.
10, lines 5-8)

To the extent collaboration is preferred, a POSITA would have found it obvious at the time of the claimed invention to use Chu's networked user interaction for providing user inputs with Hayes's API-based optimization requests to "reduce the complexity in creating an optimization work request" and let a user supply constraints/limits (e.g., trial budget and bounds) from a client device, consistent with both systems' stated goal of simplifying user configuration and running optimization over networks. (Hayes, col. 3, lines 64-65)

Regarding claim 6, Hayes in view of Chu teaches the method of claim 1, further comprising, after sending said report to said user device:

receiving from said user device, a communication that selects one or more of said machine learning models, thus yielding a selected model – Hayes does not teach this limitation. Chu teaches this limitation. Chu teaches user interaction and selection of a champion model:

"The users 32 can interact with the system 34 through a number of ways, such as over one or more networks 36" (Chu, col. 2, lines 60-61)

"Audit validation processes: … every step of the validation process should be logged. For example, who imported what model when; who selected what model as the champion model, when and why; who reviewed the champion model for compliance purposes; and who published the champion to where and when." (Chu, col. 15, lines 1-7)

and storing said selected model in a memory device – Hayes does not teach this limitation. Chu teaches this limitation. Chu teaches storing models in a repository:

"The model manager includes a secure, centralized repository for storing and organizing models." (Chu, col. 11, lines 44-45)

"FIGS. 14A-14B show … a project-based repository for cataloging analytical models" (Chu, col. 11, lines 52-54)

"both champion and challenger models should be archived into a model database." (Chu, col.
14, lines 2-3)

Hayes produces and delivers results to users via an API but lacks user model selection and model storage. Chu expressly documents real-world user selection of a champion model ("who selected what model as the champion model") and storing/archiving models in a centralized repository. It would have been obvious to one of ordinary skill at the time of the claimed invention to incorporate Chu's standard model-management functions (user selection plus repository storage) into Hayes's optimization/reporting workflow so that a user, after receiving Hayes's report, can select a model and have the system store that model for deployment and governance, an expected, predictable combination of complementary features consistent with KSR.

Regarding claims 7-12 (system claims). Claim 7 is a system claim directly analogous in scope to method claim 1 but framed in "system" form (i.e., a system comprising at least one processor and a memory storing instructions which, when executed, cause the system to perform the steps). Claim 8 is system-analogous to claim 2; claim 9 is system-analogous to claim 3; claim 10 is system-analogous to claim 4; claim 11 is system-analogous to claim 5; and claim 12 is system-analogous to claim 6. Accordingly, the §103 rejection, cited references (Hayes as Ref. A and Chu as Ref. B), and motivation to combine set forth for claim 1 apply equally to claim 7; and the §103 rejections set forth for claims 2-6 apply equally to claims 8-12, respectively.

Regarding claims 13-18 (storage device). Claim 13 is a non-transitory computer-readable medium (CRM) claim analogous in scope to method claim 1, reciting the same limitations in CRM form (i.e., instructions that, when executed, cause a processor/system to perform the steps). Claim 14 is CRM-analogous to system claim 8 / method claim 2; claim 15 to claims 9/3; claim 16 to claims 10/4; claim 17 to claims 11/5; and claim 18 to claims 12/6.
Accordingly, the § 103 rejection, cited references (Hayes and Chu), and motivation to combine set forth for claim 1 apply equally to claim 13; and the § 103 rejections set forth for claims 8-12 (and thus 2-6) apply equally to claims 14-18, respectively. Note: presenting the same claimed subject matter in system/CRM form does not patentably distinguish over the method claims; the claims are analogous in scope for purposes of the § 103 analysis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL COLEMAN, whose telephone number is (571) 272-4687. The examiner can normally be reached Mon-Fri. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL COLEMAN/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Sep 15, 2022
Application Filed
Sep 22, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597489
METHOD, DEVICE, AND COMPUTER PROGRAM FOR PREDICTING INTERACTION BETWEEN COMPOUND AND PROTEIN
2y 5m to grant Granted Apr 07, 2026
Patent 12574861
METHOD AND SYSTEM FOR ACCELERATING DISTRIBUTED PRINCIPAL COMPONENTS WITH NOISY CHANNELS
2y 5m to grant Granted Mar 10, 2026
Patent 12443678
STEPWISE UNCERTAINTY-AWARE OFFLINE REINFORCEMENT LEARNING UNDER CONSTRAINTS
2y 5m to grant Granted Oct 14, 2025

