Prosecution Insights
Last updated: April 19, 2026
Application No. 18/486,489

SYSTEM AND METHOD FOR GENERATING AND OPTIMIZING ARTIFICIAL INTELLIGENCE MODELS

Non-Final Office Action: §§ 101, 102, 103

Filed: Oct 13, 2023
Examiner: SMITH, KEVIN LEE
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Actapio Inc.
OA Round: 3 (Non-Final)

Grant Probability: 37% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 8m
Grant Probability With Interview: 55%

Examiner Intelligence

Career Allow Rate: 37% (49 granted / 134 resolved; -18.4% vs Tech Center avg)
Interview Lift: +18.0% (allowance rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 4y 8m (typical timeline; 45 applications currently pending)
Total Applications: 179 (career history, across all art units)
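The headline figures above are internally consistent; a quick sketch of the arithmetic (this assumes the dashboard derives the with-interview figure by simply adding the observed interview lift to the baseline grant probability, which is an inference about its methodology, not something the report states):

```python
# Career allow rate: granted / resolved cases
granted, resolved = 49, 134
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 36.6%, displayed as 37%

# Interview-adjusted grant probability, assuming a simple
# baseline-plus-lift model (an assumption, not documented).
baseline, lift = 0.37, 0.18
print(f"With interview: {baseline + lift:.0%}")  # 55%
```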

Statute-Specific Performance

§101: 30.7% (-9.3% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 134 resolved cases.
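If each delta is the examiner's rate minus the Tech Center average (an assumption about how this dashboard defines "vs TC avg"), the implied TC baselines can be recovered by subtraction:

```python
# Examiner allowance rate after each rejection type and the
# reported delta versus the Tech Center average.
stats = {
    "101": (0.307, -0.093),
    "103": (0.364, -0.036),
    "102": (0.101, -0.299),
    "112": (0.173, -0.227),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"Section {statute}: examiner {rate:.1%}, TC avg ~{tc_avg:.1%}")
```

Every delta implies the same ~40.0% baseline, which suggests the dashboard compares each statute against a single overall Tech Center allowance rate rather than per-statute averages.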

Office Action

Rejections: §§ 101, 102, 103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant’s submission filed 30 September 2025 [hereinafter Response] has been entered, where: Claims 1 and 9 have been amended. Claims 1-16 are pending. Claims 1-16 are rejected.

Claim Rejections - 35 U.S.C. § 101

3. 35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 1-16 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites a “method,” which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of “[(d)] generating . . . a first plurality of generation indices based on a plurality of features of the learning data,” “[(f)] determining . . . model accuracy for each of the first plurality of machine learning models,” “[(g)] performing . . . model selection to select models of a predetermined number having highest model accuracy from the first plurality of machine learning models,” “[(h)] performing . . . indices generation to generate a second plurality of generation indices based on a second plurality of features,” “[(i)] performing . . .
model accuracy determination to determine model accuracy for each of the second plurality of machine learning models,” “[(k)] iteratively performing . . . model accuracy determination until a machine learning model having a model accuracy that surpasses an accuracy threshold is generated,” and “[(l)] selecting the machine learning model having highest model accuracy from the second plurality of machine learning models for deployment.” These activities of “[(d)] generating,” “[(f)] determining,” “[(g), (i)] performing,” “[(k)] iteratively performing,” and “[(l)] selecting,” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) subsection III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim recites more details or specifics to the abstract idea of “[(h)] performing . . . indices generation,” such that “[(h.1)] wherein the second plurality of features is derived by performing genetic crossover of the features from generation indices that are associated with the models of the predetermined number,” and “[(h.2)] wherein the performing the genetic crossover comprises providing a list of input features to be used at each operation during optimization processing, a number of iterations per trial, and a number of results inherited for subsequent optimization process, and adjusting crossover rates used in the genetic crossover so that features resulting in more accurate models are inherited more frequently in next-generation features,” and accordingly, are merely more specific to the abstract idea. Thus, claim 1 recites an abstract idea.
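Stripped of the claim formatting, limitations (d) through (l) describe a genetic feature-selection loop: train a population of models on candidate feature subsets, score each, keep the top performers, cross over their features, and iterate until a model surpasses an accuracy threshold. A minimal sketch of that loop, for orientation only (the function name, fitness signature, and parameters below are hypothetical placeholders, not the applicant's implementation):

```python
import random

def evolve_feature_sets(features, fitness, k=3, pop_size=8,
                        threshold=0.9, max_iters=20, seed=0):
    """Toy genetic search over feature subsets.

    features  : list of candidate feature names
    fitness   : callable mapping a feature subset -> model accuracy
    k         : number of top models kept each generation
    """
    rng = random.Random(seed)
    # (d) initial "generation indices": random feature subsets
    population = [sorted(rng.sample(features, rng.randint(1, len(features))))
                  for _ in range(pop_size)]
    best, best_acc = None, float("-inf")
    for _ in range(max_iters):
        # (e)/(f) "train" each model and score its accuracy
        scored = sorted(population, key=fitness, reverse=True)
        top, top_acc = scored[0], fitness(scored[0])
        if top_acc > best_acc:
            best, best_acc = top, top_acc
        # (k) iterate until a model surpasses the accuracy threshold
        if best_acc > threshold:
            break
        # (g) model selection: keep the k most accurate models
        parents = scored[:k]
        # (h) genetic crossover: children mix features of two parents
        population = parents + [
            sorted(set(rng.sample(a, max(1, len(a) // 2)) +
                       rng.sample(b, max(1, len(b) // 2))))
            for a, b in [(rng.choice(parents), rng.choice(parents))
                         for _ in range(pop_size - k)]
        ]
    # (l) select the most accurate model found for deployment
    return best, best_acc
```

With a toy fitness function, such as the fraction of some known useful features a subset contains, the loop converges toward subsets containing those features; the claimed crossover-rate adjustment of limitation (h.2) is omitted here for brevity.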
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include “a processor,” which is a generic computer component used to implement the abstract idea that does not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). The claim also recites additional elements of “a first plurality of machine learning models,” and “a second plurality of machine learning models,” which are generic computer components used to implement the abstract idea, and do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). Also, these limitations generally link the use of the judicial exception to the particular technological environment or field of use pertaining to generation and selection of “models,” (MPEP § 2106.05(h)), that does not serve to integrate the abstract idea into a practical application. The claim also recites limitations of “[(e)] generating a first plurality of machine learning models trained with the learning data and the first plurality of generation indices,” “[(i)] performing machine learning model generation to generate a second plurality of machine learning models trained with the learning data and the second plurality of features,” and “[(k)] iteratively performing model selection, indices generation by performing genetic crossover with features from indices of preceding iteration, machine learning model generation, . . .” These limitations recite the use of generic components (a first plurality of machine learning models, a second plurality of machine learning models) to implement the abstract idea, and do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). The claim also recites more specifics or details of the additional element of “[(e), (i)] generating . . .
models trained,” “[(e.1)] wherein each of the first plurality of machine learning models is trained with a respective generation index of the first plurality of generation indices,” and “[(i.1)] wherein each of the second plurality of machine learning models is trained with a unique combination of features from the second plurality of features,” and accordingly, are merely more specific to the additional element. The claim also recites the additional elements of “[(a)] obtaining . . . learning data to be used in machine learning model training,” “[(b)] performing, by the processor, data validation and generating configuration files required for a deep framework,” and “[(c)] organizing, by the processor, the learning data for training, evaluation, and testing,” which are pre-processing, insignificant extra-solution activities of mere data gathering and data preparation, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application. The claim also recites more details or specifics to the additional element of “[(b)] performing . . . data validation,” “[(b.1)] wherein the deep framework builds deep learning models for production without requiring generation of additional code,” and accordingly, is merely more specific to the additional element. Therefore, claim 1 is directed to the abstract idea. Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements include “a processor,” which is a generic computer component used to implement the abstract idea that does not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim also recites additional elements of “a first plurality of machine learning models,” and “a second plurality of machine learning models,” which are generic computer components used to implement the abstract idea, and do not amount to significantly more than the abstract idea.
(MPEP § 2106.05(f)). Also, these limitations generally link the use of the judicial exception to the particular technological environment or field of use pertaining to generation and selection of “models,” (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea. The claim also recites limitations of “[(e)] generating a first plurality of machine learning models trained with the learning data and the first plurality of generation indices,” “[(i)] performing machine learning model generation to generate a second plurality of machine learning models trained with the learning data and the second plurality of features,” and “[(k)] iteratively performing model selection, indices generation by performing genetic crossover with features from indices of preceding iteration, machine learning model generation, . . .” These limitations recite the use of generic components (a first plurality of machine learning models, a second plurality of machine learning models) to implement the abstract idea, and do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim also recites more specifics or details of the additional element of “[(e), (i)] generating . . . models trained,” “[(e.1)] wherein each of the first plurality of machine learning models is trained with a respective generation index of the first plurality of generation indices,” and “[(i.1)] wherein each of the second plurality of machine learning models is trained with a unique combination of features from the second plurality of features,” and accordingly, are merely more specific to the additional element. The claim also recites the additional elements of “[(a)] obtaining . . .
learning data to be used in machine learning model training,” “[(b)] performing, by the processor, data validation and generating configuration files required for a deep framework,” and “[(c)] organizing, by the processor, the learning data for training, evaluation, and testing,” which are well-understood, routine, and conventional activities of storing, formatting, and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that do not amount to significantly more than the abstract idea. The claim also recites more details or specifics to the additional element of “[(b)] performing . . . data validation,” “[(b.1)] wherein the deep framework builds deep learning models for production without requiring generation of additional code,” and accordingly, is merely more specific to the additional element. Therefore, claim 1 is subject matter ineligible. Claim 9 recites a “non-transitory computer readable medium,” which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of “[(d)] generating . . . a first plurality of generation indices based on a plurality of features of the learning data,” “[(f)] determining . . . model accuracy for each of the first plurality of machine learning models,” “[(g)] performing . . . model selection to select models of a predetermined number having highest model accuracy from the first plurality of machine learning models,” “[(h)] performing . . . indices generation to generate a second plurality of generation indices based on a second plurality of features,” “[(i)] performing . . . model accuracy determination to determine model accuracy for each of the second plurality of machine learning models,” “[(k)] iteratively performing . . .
model accuracy determination until a machine learning model having a model accuracy that surpasses an accuracy threshold is generated,” and “[(l)] selecting the machine learning model having highest model accuracy from the second plurality of machine learning models for deployment.” These activities of “[(d)] generating,” “[(f)] determining,” “[(g), (i)] performing,” “[(k)] iteratively performing,” and “[(l)] selecting,” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) subsection III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim recites more details or specifics to the abstract idea of “[(h)] performing . . . indices generation,” such that “[(h.1)] wherein the second plurality of features is derived by performing genetic crossover of the features from generation indices that are associated with the models of the predetermined number,” and “[(h.2)] wherein the performing the genetic crossover comprises providing a list of input features to be used at each operation during optimization processing, a number of iterations per trial, and a number of results inherited for subsequent optimization process, and adjusting crossover rates used in the genetic crossover so that features resulting in more accurate models are inherited more frequently in next-generation features,” and accordingly, are merely more specific to the abstract idea. Thus, claim 9 recites an abstract idea. Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include “a processor,” which is a generic computer component used to implement the abstract idea that does not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)).
The claim also recites additional elements of “a first plurality of machine learning models,” and “a second plurality of machine learning models,” which are generic computer components used to implement the abstract idea, and do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). Also, these limitations generally link the use of the judicial exception to the particular technological environment or field of use pertaining to generation and selection of “models,” (MPEP § 2106.05(h)), that does not serve to integrate the abstract idea into a practical application. The claim also recites limitations of “[(e)] generating a first plurality of machine learning models trained with the learning data and the first plurality of generation indices,” “[(i)] performing machine learning model generation to generate a second plurality of machine learning models trained with the learning data and the second plurality of features,” and “[(k)] iteratively performing model selection, indices generation by performing genetic crossover with features from indices of preceding iteration, machine learning model generation, . . .” These limitations recite the use of generic components (a first plurality of machine learning models, a second plurality of machine learning models) to implement the abstract idea, and do not serve to integrate the abstract idea into a practical application. (MPEP § 2106.05(f)). The claim also recites more specifics or details of the additional element of “[(e), (i)] generating . . . models trained,” “[(e.1)] wherein each of the first plurality of machine learning models is trained with a respective generation index of the first plurality of generation indices,” and “[(i.1)] wherein each of the second plurality of machine learning models is trained with a unique combination of features from the second plurality of features,” and accordingly, are merely more specific to the additional element.
The claim also recites the additional elements of “[(a)] obtaining . . . learning data to be used in machine learning model training,” “[(b)] performing, by the processor, data validation and generating configuration files required for a deep framework,” and “[(c)] organizing, by the processor, the learning data for training, evaluation, and testing,” which are pre-processing, insignificant extra-solution activities of mere data gathering and data preparation, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application. The claim also recites more details or specifics to the additional element of “[(b)] performing . . . data validation,” “[(b.1)] wherein the deep framework builds deep learning models for production without requiring generation of additional code,” and accordingly, is merely more specific to the additional element. Therefore, claim 9 is directed to the abstract idea. Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements include “a processor,” which is a generic computer component used to implement the abstract idea that does not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim also recites additional elements of “a first plurality of machine learning models,” and “a second plurality of machine learning models,” which are generic computer components used to implement the abstract idea, and do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). Also, these limitations generally link the use of the judicial exception to the particular technological environment or field of use pertaining to generation and selection of “models,” (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea.
The claim also recites limitations of “[(e)] generating a first plurality of machine learning models trained with the learning data and the first plurality of generation indices,” “[(i)] performing machine learning model generation to generate a second plurality of machine learning models trained with the learning data and the second plurality of features,” and “[(k)] iteratively performing model selection, indices generation by performing genetic crossover with features from indices of preceding iteration, machine learning model generation, . . .” These limitations recite the use of generic components (a first plurality of machine learning models, a second plurality of machine learning models) to implement the abstract idea, and do not amount to significantly more than the abstract idea. (MPEP § 2106.05(f)). The claim also recites more specifics or details of the additional element of “[(e), (i)] generating . . . models trained,” “[(e.1)] wherein each of the first plurality of machine learning models is trained with a respective generation index of the first plurality of generation indices,” and “[(i.1)] wherein each of the second plurality of machine learning models is trained with a unique combination of features from the second plurality of features,” and accordingly, are merely more specific to the additional element. The claim also recites the additional elements of “[(a)] obtaining . . . learning data to be used in machine learning model training,” “[(b)] performing, by the processor, data validation and generating configuration files required for a deep framework,” and “[(c)] organizing, by the processor, the learning data for training, evaluation, and testing,” which are well-understood, routine, and conventional activities of storing, formatting, and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that do not amount to significantly more than the abstract idea.
The claim also recites more details or specifics to the additional element of “[(b)] performing . . . data validation,” “[(b.1)] wherein the deep framework builds deep learning models for production without requiring generation of additional code,” and accordingly, is merely more specific to the additional element. Therefore, claim 9 is subject matter ineligible. Claims 2, 3, and 4 depend directly or indirectly from claim 1. Claims 10, 11, and 12 depend directly or indirectly from claim 9. The claims recite more details or specifics to the abstract idea of “[(d)] generating . . . a first plurality of generation indices,” (claims 2 and 10: “[(d.1)] wherein the first plurality of generation indices comprises generation indices specifying the plurality of features of the learning data”; claims 3 and 11: “[(d.2)] wherein the first plurality of generation indices further comprises at least one of generation indices specifying structure of machine learning model to be generated, generation indices specifying training method of machine learning model associated with a feature, or generation indices specifying model type of machine learning model to be generated”; claims 4 and 12: “[(d.2)] wherein the first plurality of generation indices further comprises at least one of generation indices specifying number of intermediary layers to be included in a machine learning model, generation indices specifying number of nodes to be included in each of the intermediary layers, or generation indices specifying node connection of the number of nodes”), and accordingly, are simply more specific to the abstract idea. The abstract ideas of these claims are not integrated into a practical application, (see MPEP § 2106.05(g)), nor do they amount to significantly more than the abstract idea, (MPEP § 2106.05(d)), because the claims recite no more than the abstract idea. Accordingly, claims 2-4 and 10-12 are subject-matter ineligible. Claim 5 depends directly or indirectly from claim 1.
Claim 13 depends directly or indirectly from claim 9. The claims recite more specifics or details to the abstract idea of “[(f)] determining . . . model accuracy,” (claims 5 and 13: “[(f.1)] wherein determining model accuracy for each of the first plurality of machine learning models comprises evaluating model accuracy for each of the first plurality of machine learning models using the evaluation data”), and accordingly, are merely more specific to the abstract idea. Also, the claim recites more specifics or details to the additional elements of “[(a)] obtaining . . . learning data,” (claims 5 and 13: “[(a.1)] wherein the learning data is split into training data and evaluation data”), and the additional element of “[(e)] generating the first plurality of machine learning models,” (claims 5 and 13: “[(e.1)] wherein generating the first plurality of machine learning models comprises training the first plurality of machine learning models with the training data and the plurality of features of the learning data”), which are each merely more specific to the respective additional element. The abstract ideas of these claims are not integrated into a practical application, (see MPEP § 2106.05(g)), nor do they amount to significantly more than the abstract idea, (MPEP § 2106.05(d)), because the claims recite no more than the abstract idea. Accordingly, claims 5 and 13 are subject-matter ineligible. Claims 6-8 depend directly or indirectly from claim 1. Claims 14-16 depend directly or indirectly from claim 9. The claims recite more details or specifics of the additional element of “[(a)] obtaining learning data,” (claims 6 and 14: “wherein the plurality of features of the learning data are statistical features of the learning data”; claims 7 and 15: “[(a.1)] wherein the learning data comprises one of integers, floating-point numbers, or strings”; claims 8 and 16: “[(a.1)] wherein the learning data comprises integers . . .
.”), and accordingly, are merely more specific to the additional element. Also, the claims recite more details or specifics of the abstract idea of “[(d)] generating a first plurality of generation indices,” (claims 8 and 16: “[(d.1)] wherein . . . and the first plurality of generation indices is generated based on contiguity of the learning data”), and accordingly, are merely more specific to the abstract idea. The abstract ideas of these claims are not integrated into a practical application, (see MPEP § 2106.05(g)), nor do they amount to significantly more than the abstract idea, (MPEP § 2106.05(d)), because the claims recite no more than the abstract idea. Accordingly, claims 6-8 and 14-16 are subject-matter ineligible.

Claim Rejections – 35 U.S.C. § 102

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

6. Claims 1 and 9 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by US Published Application 20180314938 to Andoni et al. [hereinafter Andoni ‘938]. Regarding claims 1 and 9, Andoni ‘938 teaches [a] method for optimizing machine learning model generation (Andoni ‘938, Abstract, teaches “[t]he method also includes generating an input data set of the AMB engine based on application of one or more rules to the one or more data sources.
The method further includes, based on the input data set and the machine learning problem type, initiating execution of the AMB engine to generate a neural network configured to model at least a portion of the input data set”) of claim 1, and [a] non-transitory computer readable medium configured to execute machine readable instructions stored in a storage, for optimizing machine learning model generation (Andoni ‘938 ¶ 0114 teaches “a computer-readable storage device stores instructions that, when executed, cause a computer to perform operations including, based on a fitness function”) of claim 9, comprising: [(a)] obtaining, by a processor (Andoni ‘938 ¶ 0115 teaches “a method includes receiving, at a processor of a computing device, input that identifies one or more data sources [(that is, by a processor)]”), learning data to be used in machine learning model training (Andoni ‘938 ¶¶ 0021-22 teaches “[t]he genetic algorithm 110 and the backpropagation trainer 180 may cooperate to automatically generate a neural network model of a particular data set, such as an illustrative input data set 102. . . . The system 100 may provide an automated model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set [(that is, the “specified data set” and/or “input data set 102” is obtaining, by a processor, learning data to be used in machine learning model training)]”); [(b)] performing, by the processor, data validation and generating configuration files (Andoni ‘938 ¶ 0043 teaches “a data profiler 320 that examines data fields (e.g., columns) and determines various information regarding the data fields based on application of one or more rules 328 [(that is, “examines data fields” is performing . . . 
data validation)]; Andoni ‘938 ¶ 0116 teaches “an automated model building (AMB) pre-processor configured to receive input that identifies one or more data sources and to determine, based on the input, a machine learning problem type of a plurality of machine learning problem types supported by an AMB engine [(that is, to “determine a machine learning problem” is generating configuration files)]”) required for a deep framework (Andoni ‘938 ¶ 0088 teaches “the system 100 may represent a single automated model building framework that is capable of generating neural networks [(that is, a deep framework)] for at least regression problems, classification problems, and reinforcement learning problems”), [(b.1)] wherein the deep framework (Andoni ‘938 ¶ 0043 teaches “a single automated model building framework [(that is, the deep framework)]”) builds deep learning models for production without requiring generation of additional code (in view of the “single automated model building framework,” Andoni ‘938 ¶ 0004 further teaches that “an ‘automated model building engine’ may be one or more devices, modules, or components configured to determine at least one machine learning solution (e.g., neural network) that models all or a portion of an input data set. The ability to automatically initialize a model building engine based on provided data sources without a priori knowledge of the type of machine learning problem to be solved enables data-driven model creation for multiple types of problems, . . . .
In the example in which the automated model building engine utilizes a genetic algorithm and selective backpropagation, such a combination may enable generating a neural network that models a particular data set [(that is, the “automated model building framework” inherently generates a neural network without requiring generation of additional code)] with acceptable accuracy and in less time than using genetic algorithms or backpropagation alone”; Andoni ‘938 ¶ 0022 teaches “automated data-driven model building process that enables even inexperienced users to quickly and easily build highly accurate models based on a specified data set [(that is, “inexperienced users” inherently deploy the “automated model building framework” to build deep learning models for production without requiring generation of additional code)]”); [(c)] organizing, by the processor, the learning data for training, evaluation, and testing (Andoni ‘938 ¶ 0049 teaches “For example, the combined data source [(that is, learning data)] may be divided into training [(that is, organizing . . . the learning data for training)] and testing sets [(that is, organizing . . . the learning data for . . . evaluation, and testing)], which may potentially include multiple testing sets for crossfold validation. Thus, it is to be understood that although the input data set 102 is shown in FIG. 1 as a single data set, the input data set 102 may represent one or more training sets and one or more testing sets [(that is, organizing . . . the learning data for training, evaluation, and testing)]”); [(d)] generating, by the processor, a first plurality of generation indices based on a plurality of features of the learning data (Andoni ‘938 ¶ 0024 teaches “[t]he input set 120 and the output set 130 may each include a plurality of models, where each model includes data representative of a neural network.
For example, each model may specify a neural network by at least a neural network topology, a series of activation functions, and connection weights [(that is, “topology,” “activation functions,” and “connection weights” are a plurality of features of the learning data)]. The topology of a neural network may include a configuration of nodes of the neural network and connections between such nodes. The models may also be specified to include other parameters, including but not limited to bias values/functions and aggregation functions [(that is, “configurations of nodes and connections” is generating, by the processor, a first plurality of generation indices based on a plurality of features of the learning data)]”; [Examiner notes that the plain meaning of the claim term “generation indices” is a configuration file specifying a type and a behavior of a model to be generated, in which the broadest reasonable interpretation of the term covers the teachings of at least the “neural network topology” of Andoni ‘938]; as an example, Andoni ‘938 ¶ 0005 teaches “a home with four temperature sensors that periodically collect temperature readings in the living room (L), the dining room (D), the master bedroom (M), and the guest bedroom (G), respectively. In this example, a [specified] data set may include four columns, where each column corresponds to temperature readings from a particular sensor in a particular room, and where each row corresponds to a particular time at which the four sensors took a temperature reading [(that is, “temperature readings” are examples of a plurality of features of the learning data)]”); [(e)] generating a first plurality of machine learning models trained with the learning data and the first plurality of generation indices (Andoni ‘938 ¶ 0020 & Fig.
1 teaches “the system 100 includes a genetic algorithm 110 and a backpropagation trainer 180 [Examiner annotations in dashed-line text boxes]: [Image: annotated reproduction of Andoni ‘938 Fig. 1] Andoni ‘938 ¶ 0006 teaches a “genetic algorithm may start with a population of random models that each define a neural network with different topology, weights and activation functions [(that is, “a population of random models” is generating a first plurality of machine learning models)]”; Andoni ‘938 ¶ 0034 teaches “[t]he backpropagation trainer 180 may utilize a portion, but not all of the input data set 102 to train the connection weights of the trainable model 122, thereby generating a trained model 182 [(that is, trained with the learning data and the first plurality of generation indices)]”), [(e.1)] wherein each of the first plurality of machine learning models is trained with a respective generation index of the first plurality of generation indices (Andoni ‘938 ¶ 0006 teaches “[o]ver the course of several epochs (also known as generations), the models may be evolved using biology-inspired reproduction operations, such as crossover (e.g., combining characteristics of two neural networks), mutation (e.g., randomly modifying a characteristic of a neural network), stagnation/extinction (e.g., removing neural networks whose accuracy has not improved in several epochs), and selection (e.g., identifying the best performing neural networks via testing [(wherein each of the first plurality of machine learning models is trained with a respective generation index of the first plurality of generation indices)]”); [(f)] determining, by the processor, model accuracy for each of the first plurality of machine learning models (Andoni ‘938 ¶ 0028 teaches “fitness function 140 is based on a frequency and/or magnitude of errors produced by testing a model on the input data set 102 [(that is, determining, by the processor, model accuracy for each of the first plurality of machine
learning models)]”); [(g)] performing, by the processor, model selection to select models of a predetermined number having highest model accuracy from the first plurality of machine learning models (Andoni ‘938 ¶ 0104 teaches “[w]hen the termination criterion is satisfied, at 1220, the method 1200 may include selecting and outputting a fittest model, at 1222 [(that is, select models of a predetermined number)]”; Andoni ‘938 ¶ 0033 teaches “the trainable model 122 may represent an advancement with respect to the fittest models of the input set 120 [(that is, performing, by the processor, model selection to select models of a predetermined number having highest model accuracy from the first plurality of machine learning models)]”); [(h)] performing, by the processor, indices generation to generate a second plurality of generation indices based on a second plurality of features (Andoni ‘938 ¶ 0023 teaches “each iteration of the search process (also called an epoch or generation of the genetic algorithm) may have an input set (or population) 120 and an output set (or population) 130. The input set 120 of an initial epoch of the genetic algorithm 110 may be randomly or pseudo-randomly generated. After that, the output set 130 of one epoch may be the input set 120 of the next (non-initial) epoch, as further described herein [(that is, “each epoch or generation of the genetic algorithm” is inherently, and necessarily, performing, by the processor, indices generation to generate a second plurality of generation indices based on a second plurality of features)]”), [(h.1)] wherein of the second plurality of features is derived by performing genetic crossover of the features from generation indices that are associated with the models of the predetermined number (Andoni ‘938 ¶ 0075 & Fig. 
10 teaches “genetically combining models may include crossover operations in which a portion of one model is added to a portion of another model [Examiner annotations in dashed-line text boxes]: [Image: annotated reproduction of Andoni ‘938 Fig. 10] Andoni ‘938 ¶ 0032 teaches, in Fig. 1 above, that “crossover operation 160 and the mutation operation 170 is highly stochastic under certain constraints and a defined set of probabilities optimized for model building, which produces reproduction operations that can be used to generate the output set 130, or at least a portion thereof, from the input set 120 [(that is, wherein of the second plurality of features is derived by performing genetic crossover of the features from generation indices that are associated with the models of the predetermined number)]”); [(h.2)] wherein the performing the genetic crossover comprises providing a list of input features to be used at each operation during optimization processing (Andoni ‘938 ¶ 0008 teaches “[t]he processor may also select a subset of data structures based on their respective fitness values [(that is, a list of input features)] and may perform at least one of a crossover operation . . . with respect to at least one data structure of the subset to generate a trainable data structure”; Andoni ‘938 ¶ 0032 teaches the “crossover operation 160 . . .
is highly stochastic under certain constraints and a defined set of probabilities optimized for model building [(that is, “subset of data structures” and “defined set of optimized probabilities” is the performing the genetic crossover comprises providing a list of input features to be used at each operation during optimization processing)]”), a number of iterations per trial (Andoni ‘938 ¶ 0038 teaches “each iteration of the search process (also called an epoch or generation of the genetic algorithm) may have an input set (or population) 120 and an output set (or population) 130”; Andoni ‘938 ¶ 0038 teaches “the user may provide input to limit a number of epochs that will be executed by the genetic algorithm 110 [(that is, wherein the performing the genetic crossover comprises . . . a number of iterations per trial)]”), and a number of results inherited for subsequent optimization process (Andoni ‘938 ¶ 0032 teaches the “crossover operation 160 . . . produces reproduction operations that can be used to generate the output set 130, or at least a portion thereof, from the input set 120 [(that is, “generate the output set” is wherein the performing the genetic crossover comprises . . .
a number of results inherited for subsequent optimization process)]”), and [(h.3)] adjusting crossover rates used in the genetic crossover so that features resulting in more accurate models are inherited more frequently in next-generation features (Andoni ‘938 ¶ 0038 teaches “the user may specify a time limit indicating an amount of time that the genetic algorithm 110 has to generate the model, and the genetic algorithm 110 may determine a number of epochs that will be executed based on the specified time limit [(that is, “determine a number of epochs per specified time limit” is adjusting crossover rates used in the genetic crossover)]”; Andoni ‘938 ¶ 0006 teaches a “[t]raining a model that is generated by breeding the best performing population members of an epoch may serve to reinforce desired ‘genetic traits’ (e.g., neural network topology, activation functions, connection weights, etc.), and introducing the trained model back into the genetic algorithm may lead the genetic algorithm to converge to an acceptably accurate solution (e.g., neural network) faster, for example because desired ‘genetic traits’ are available for inheritance in later epochs of the genetic algorithm [(that is, “desired genetic traits” being features resulting in more accurate models are inherited more frequently in next generation features)]”); [(i)] performing machine learning model generation to generate a second plurality of machine learning models trained with the learning data and the second plurality of features (Andoni ‘938 ¶ 0008 teaches “the processor may further provide the trainable data structure to an optimization trainer that is configured to train the trainable data structure based on a portion of the input data set to generate a trained structure and to provide the trained data structure as input to a second iteration of the recursive search that is subsequent to the first iteration [(that is, performing machine learning model generation to generate a second plurality of 
machine learning models trained with the learning data and the second plurality of features)]”), [(i.1)] wherein each of the second plurality of machine learning models is trained with a unique combination of features from the second plurality of features (Andoni ‘938 ¶ 0076 & Fig. 9 teaches “the ‘overall elite’ models 860, 862, and 864 may be genetically combined to generate the trainable model 122 [Examiner annotations in dashed-line text boxes]: [Image: annotated reproduction of Andoni ‘938 Fig. 9] Andoni ‘938 ¶ 0076 teaches “[t]he backpropagation trainer 180 may train connection weights of the trainable model 122 based on a portion of the input data set 102 [(that is, wherein each of the second plurality of machine learning models is trained with a unique combination of features from the second plurality of features)]”); [(j)] performing, by the processor, model accuracy determination to determine model accuracy for each of the second plurality of machine learning models (Andoni ‘938 ¶ 0087 teaches “one iteration of the genetic algorithm 110 may include both genetic operations and evaluating the fitness of every model and species.
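The model-selection and accuracy-determination steps mapped above (limitations (g), (f), and (j)) reduce to ranking a population by measured fitness and keeping a predetermined number of top performers. A minimal sketch, with hypothetical names not drawn from either the claims or Andoni ‘938:

```python
def select_fittest(models, accuracies, k):
    """Model selection: keep the k models with the highest measured
    accuracy (the 'fittest' members of the population, in
    genetic-algorithm terms)."""
    ranked = sorted(zip(models, accuracies), key=lambda pair: pair[1], reverse=True)
    return [model for model, _ in ranked[:k]]

survivors = select_fittest(["m1", "m2", "m3", "m4"], [0.61, 0.88, 0.54, 0.79], k=2)
# → ["m2", "m4"]
```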
Training trainable models generated by breeding the fittest models of an epoch may improve fitness of the trained models without requiring training of every model of an epoch [(that is, “evaluating the fitness” is performing, by the processor, model accuracy determination to determine model accuracy for each of the second plurality of machine learning models)]”); [(k)] iteratively performing model selection, indices generation by performing genetic crossover with features from indices of preceding iteration, machine learning model generation, and model accuracy determination until a machine learning model having a model accuracy that surpasses an accuracy threshold is generated (Andoni ‘938 ¶ 0006 teaches “[t]he genetic algorithm may start with a population of random models that each define a neural network with different topology, weights and activation functions. Over the course of several epochs (also known as generations) [(that is, “epochs” or “generations” are iteratively performing model selection)], the models may be evolved using biology-inspired reproduction operations, such as crossover (e.g., combining characteristics of two neural networks) [(that is, indices generation by performing genetic crossover with features from indices of preceding iteration)], . . . and selection (e.g., identifying the best performing neural networks via testing). In addition, the best performing models of an epoch may be selected for reproduction to generate a trainable model”); and [(l)] selecting the machine learning model having the model accuracy that surpasses the accuracy threshold for deployment (Andoni ‘938 ¶ 0085 teaches the “[o]peration at the system 100 may continue iteratively until specified a termination criterion, such as a . . . threshold fitness value (of an overall fittest model) is satisfied [(that is, accuracy threshold)]. 
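The iterative limitation (k), repeating model selection, crossover-based indices generation, model generation, and accuracy determination until a model surpasses an accuracy threshold, can be sketched as an evolutionary loop. Everything below (the bit-vector "models", the toy fitness and breeding functions, the elitism step) is an illustrative assumption, not the claimed method or Andoni ‘938's implementation:

```python
import random

def evolve_until_threshold(population, fitness, breed, threshold, max_epochs=200, seed=0):
    """Iterate: evaluate fitness, select the fitter half as parents,
    breed a new generation (keeping the current best, i.e. elitism),
    and stop once some model's fitness surpasses the threshold."""
    rng = random.Random(seed)
    for _ in range(max_epochs):
        best = max(population, key=fitness)
        if fitness(best) >= threshold:
            return best
        parents = sorted(population, key=fitness, reverse=True)[:len(population) // 2]
        population = [best] + [breed(rng.choice(parents), rng.choice(parents), rng)
                               for _ in range(len(population) - 1)]
    return max(population, key=fitness)

# Toy "models": bit vectors; fitness = fraction of ones.
def fitness(model):
    return sum(model) / len(model)

def breed(a, b, rng):
    point = rng.randrange(1, len(a))      # single-point genetic crossover
    child = a[:point] + b[point:]
    child[rng.randrange(len(child))] = 1  # mutation (biased toward 1s, for the toy)
    return child

rng = random.Random(1)
start = [[rng.randrange(2) for _ in range(8)] for _ in range(10)]
best = evolve_until_threshold(start, fitness, breed, threshold=1.0)
```

Because the current best is carried into each new generation, the returned model's fitness can never fall below the best of the initial population, mirroring the reinforcement of "desired genetic traits" described in the quoted passages.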
When the termination criterion is satisfied, an overall fittest model of the last executed epoch may be selected and output as representing a neural network that best models the input data set 102 [(that is, selecting the machine learning model having the model accuracy that surpasses the accuracy threshold for deployment)]”). Claim Rejections - 35 U.S.C. § 103 7. The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 8. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 9. This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention. 10. Claims 2-4 and 10-12 are rejected under 35 U.S.C. § 103 as being unpatentable over US Published Application 20180314938 to Andoni et al. [hereinafter Andoni ‘938] in view of US Published Application 20210035025 to Kalluri et al. [hereinafter Kalluri]. Regarding claims 2 and 10, Andoni ‘938 teaches all of the limitations of claims 1 and 9, respectively, as described above in detail. Though Andoni ‘938 teaches input data relating to an environment of sensors, such as temperature sensors and/or a large array of sensors distributed around a wind farm, Andoni ‘938 does not explicitly teach - wherein the first plurality of generation indices comprises generation indices specifying the plurality of features of the learning data. But Kalluri teaches - wherein the first plurality of generation indices comprises generation indices specifying the plurality of features of the learning data (Kalluri ¶ 0092 teaches “[a]n ML system may generate a summary vector for each example in a set of training examples. The ML system may use the summary vectors, in isolation or in conjunction with other features, to train a ML model to estimate unknown labels for new examples based on learned patterns [(that is, the “summary vector” is generation indices specifying the plurality of features of the learning data)]”). Andoni ‘938 and Kalluri are from the same or similar field of endeavor. Andoni ‘938 teaches an input data set and a plurality of data structures, each of the plurality of data structures including data representative of a neural network.
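Kalluri's "summary vector" (¶ 0092, quoted above) characterizes each example's list by its distribution over a pre-determined set of item-clusters. A minimal sketch of that idea, with a hypothetical item-to-cluster mapping not taken from the reference:

```python
def summary_vector(items, item_to_cluster, num_clusters):
    """Summarize a list of items as its normalized distribution over a
    fixed set of item-clusters; the vector length equals the number of
    clusters, per Kalluri's description."""
    counts = [0] * num_clusters
    for item in items:
        counts[item_to_cluster[item]] += 1
    total = len(items) or 1  # avoid division by zero for an empty list
    return [count / total for count in counts]

clusters = {"apple": 0, "pear": 0, "beef": 1, "cod": 2}  # hypothetical clustering
vec = summary_vector(["apple", "pear", "beef", "apple"], clusters, num_clusters=3)
# → [0.75, 0.25, 0.0]
```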
Kalluri teaches techniques for summarizing lists for machine learning operations. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of Applicant’s invention to modify Andoni ‘938 pertaining to an input data set and a plurality of data structures for applying a genetic algorithm to machine learning models with the machine learning model summarization lists of Kalluri. The motivation to do so is because “[a] summary vector allows for a more memory efficient and compact characterization of a list. . . . a summary vector for a given list of items is generated as a function of the distribution of the list elements over a pre-determined set of item-clusters. . . . The result is a summary vector that has a length equal to the number of clusters in the pre-determined set of item-clusters. Thus, the summary vector is a compact representation that conveys meaningful information about the distribution of items in a list. This information may not be readily apparent from the raw feature vector values and may also be useful in a variety of ML applications.” (Kalluri ¶¶ 0033-34). Regarding claims 3 and 11, the combination of Andoni ‘938 and Kalluri teaches all of the limitations of claims 2 and 10, respectively, as described above in detail. Kalluri teaches - wherein the first plurality of generation indices further comprises at least one of generation indices specifying structure of machine learning model to be generated, generation indices specifying training method of machine learning model associated with a feature (Kalluri ¶ 0036 teaches “the summarization techniques [(that is, generation indices)] are used to train ML models.
An ML system may receive a set of training examples [(that is, training method)], where each example is associated with a list and a label [(that is, generation indices specifying training method of machine learning model associated with a feature)]”), or generation indices specifying model type of machine learning model to be generated. Andoni ‘938 and Kalluri are from the same or similar field of endeavor. Andoni ‘938 teaches an input data set and a plurality of data structures, each of the plurality of data structures including data representative of a neural network. Kalluri teaches techniques for summarizing lists for machine learning operations. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of Applicant’s invention to modify Andoni ‘938 pertaining to an input data set and a plurality of data structures for applying a genetic algorithm to machine learning models with the machine learning model summarization lists of Kalluri. The motivation to do so is because “[a] summary vector allows for a more memory efficient and compact characterization of a list. . . . a summary vector for a given list of items is generated as a function of the distribution of the list elements over a pre-determined set of item-clusters. . . . The result is a summary vector that has a length equal to the number of clusters in the pre-determined set of item-clusters.” (Kalluri ¶¶ 0033-34).

Prosecution Timeline

Oct 13, 2023
Application Filed
Sep 27, 2024
Non-Final Rejection — §101, §102, §103
Jan 14, 2025
Applicant Interview (Telephonic)
Jan 14, 2025
Examiner Interview Summary
Feb 05, 2025
Response Filed
May 14, 2025
Final Rejection — §101, §102, §103
Aug 04, 2025
Response after Non-Final Action
Sep 30, 2025
Request for Continued Examination
Oct 09, 2025
Response after Non-Final Action
Dec 04, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591815
METHOD AND SYSTEM FOR UPDATING MACHINE LEARNING BASED CLASSIFIERS FOR RECONFIGURABLE SENSORS
2y 5m to grant Granted Mar 31, 2026
Patent 12585917
REINFORCEMENT LEARNING USING ADVANTAGE ESTIMATES
2y 5m to grant Granted Mar 24, 2026
Patent 12547759
PRIVACY PRESERVING MACHINE LEARNING MODEL TRAINING
2y 5m to grant Granted Feb 10, 2026
Patent 12530613
SYSTEMS AND METHODS FOR PERFORMING QUANTUM EVOLUTION IN QUANTUM COMPUTATION
2y 5m to grant Granted Jan 20, 2026
Patent 12518214
DISTRIBUTED MACHINE LEARNING SYSTEMS INCLUDING GENERATION OF SYNTHETIC DATA
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
37%
Grant Probability
55%
With Interview (+18.0%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
