Prosecution Insights
Last updated: April 19, 2026
Application No. 18/966,543

TECHNIQUES FOR GENERATING SYNTHETIC DATA

Status: Non-Final Office Action (§101, §103)
Filed: Dec 03, 2024
Examiner: MOUNDI, ISHAN NMN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: SAS Institute Inc.
OA Round: 3 (Non-Final)
Grant Probability: 12% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 6m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 12% (2 granted / 16 resolved; -42.5% vs TC avg)
Interview Lift: +33.3% among resolved cases with interview
Avg Prosecution: 4y 6m
Currently Pending: 41 applications
Total Applications: 57, across all art units
Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center averages are estimates, based on career data from 16 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/09/2025 has been entered.

Response to Amendments

The amendment filed 10/09/2025 has been entered. Claims 1-32 are pending in the application; new claims 31-32 have been added.

Response to Arguments

Argument 1, regarding the 101 rejections: Applicant argues that a human mind is not capable of executing machine learning models, and that training a model is not a judicial exception in view of the AI-SME memo and Example 39. Examiner notes that in the 101 rejections, the training of the machine learning models was determined to be generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h), since the claimed machine learning models are recited at a high level of generality.

Applicant argues that a human mind aided with a pencil and paper cannot practically generate synthetic data with machine learning models. As stated in the rejections below, the machine learning models are a tool used to generally link the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). A machine learning model is designed to mimic the human mind; as such, under the broadest reasonable interpretation of the claims, the mental processes may be performed practically in the human mind, with the aid of a pencil, paper, and data, but for the processors, as there is nothing in the claims or the specification that indicates otherwise. Generating synthetic data, as recited in the claims, is interpreted as generating "fake" data, which can also be generated via pen and paper. The claims do not specify anything further about the synthetic data, beyond being generated with machine learning models, that would indicate a human cannot perform the limitation.

Applicant also argues that the machine learning models are performing complex computations at a rate that a human cannot imitate. Examiner notes that the claims are silent with respect to the rate/speed of the performance. As such, this argument is directed towards an unclaimed limitation. In addition, even though a computer can perform some operations at a higher speed than a human, this does not negate the fact that a human can still perform said operations.

Applicant also argues that a limitation involving a mathematical concept described in the specification may not be sufficient to group the limitation as a mathematical concept, and that no mathematical formula is claimed in the independent claims. Examiner respectfully disagrees because the independent claims appear to claim an error function which is a combination of a prediction error function, similarity error function, and bias assessment error function, as well as computing an objective function value based upon one of the error functions. Under the broadest reasonable interpretation, "computing" is interpreted as calculating.
As such, the claims are directed to mathematical concepts or mathematical relationships.

Applicant also argues that the specification recites problems with existing synthetic data generation and also provides solutions to these problems that are reflected in the claims. Examiner respectfully disagrees because the alleged solutions are recited at a high level, and there does not appear to be a clear improvement made to synthetic data generation beyond the abstract ideas described in the 101 rejections below.

Applicant also argues that the claims recite technical solutions to technical problems: calculating a prediction error based on cluster centroids and real data, using this prediction error to calculate an objective function, and improving the accuracy of the model based on a feedback loop dependent upon the objective function. Examiner notes that "determining that the objective function is an optimal value" is recited at a high level and that it is not clear how the optimal value is determined in the claim language. These limitations do not amount to significantly more than the judicial exceptions because the solution includes determining a generic optimal value at a high level without any specific limiting steps or features beyond updating hyperparameters if the objective function value is not determined to be optimal. Updating parameters to determine another objective function value is interpreted as generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h), since the hyperparameters are recited at a high level of generality. The full 101 rejections are outlined below.

Argument 2, regarding the 103 rejections: Applicant argues that Li does not teach separate models generating separate sets of hyperparameters, specifically when lines 15-20 of column 7 were cited. Examiner respectfully disagrees because in the paragraph that follows, specifically lines 41-43 of column 7, Li teaches a "model and training database 406 is present and configured to store training/test datasets and developed models", meaning multiple models are taught by Li. The examiner notes that applicant's arguments about what the limitations "require" read limitations into the claims that are not present. For example, there is no requirement currently present in the claim that the sets of hyperparameters be "different", that they are "distinct", or that they are input "simultaneously." If applicant wishes these limitations to be present, then the claims should be amended to say so.

Applicant also argues that cluster centroids taught by Li are merely parameters of a model, whereas the claimed cluster centroids are produced by a learning model. C7:L19-33 teaches cluster centroids and number of clusters as parameters of a learning model. C7:L52 teaches machine learning models may incorporate algorithms, including a clustering algorithm. Li further teaches the one or more machine learning models may be clustering algorithms, which output clusters (see Li C7:L41-52). Therefore, Li teaches hyperparameters of a machine learning model being used to generate clusters.

Applicant also argues C7:L30 of Li does not teach generating synthetic data vectors based on cluster centroids, hyperparameters, and real data using a machine learning model. Synthetic data vectors are defined in the specification of the instant application as artificial, simulated, or fake data that is generated using mathematical or computational models.
C7:L30 teaches cluster centroids as a hyperparameter used in the model training stage. The model training stage also describes generation of cluster centroids as well as tuning hyperparameters, and also using a hidden Markov model with input data 401, which is the set of real data. Hidden Markov models are routinely used to generate synthetic data, especially data that is not observable, based on real data that exists, such as input data 401. The model training stage also teaches the use of support vector machines, which output vectors. Therefore, Li teaches the use of cluster centroids, hyperparameters, and a set of real-world data being used to generate synthetic data with the use of hidden Markov models. The entire model training stage is described in C6:L58-67 and C7:L1-56.

Applicant also argues that Li does not teach using one machine learning model to generate cluster centroids and another machine learning model to generate synthetic data. Examiner respectfully disagrees because Li teaches "the one or more machine and/or deep learning models may comprise any suitable algorithm known to those with skill in the art including, but not limited to: LLMs, … unsupervised learning algorithms such as clustering algorithms, hidden Markov models, singular value decomposition, and/or the like", C7:L44-53.

Applicant also argues that Li cannot teach an error function including a similarity error function, a prediction error function, and a bias assessment error. Applicant argues that Bera does not teach multiple models, and that the CER and WER taught in Bera would replace a loss function taught in Li. Applicant also argues that the similarity function and predictive models taught in Cella teach predictions of future and current data values instead of a difference between synthetic data and real data. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). If the references were to be combined, the loss function taught in Li would not disappear, and Li teaches the use of multiple models. Regarding Cella, as Li teaches the use of synthetic data generation and a real data set, the current and future data predictions taught in Cella may be combined with Li to teach the difference between the synthetic data and real data taught in Li. Examiner also notes that if Cella were to be combined with Li, the loss function of Li would not disappear if Cella's similarity function were to be incorporated in Li's loss function. The full prior art rejections are outlined below.
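As background for the examiner's point above that hidden Markov models are routinely used to generate synthetic data from observed data, here is a minimal, hypothetical sampling sketch in Python. The two-state parameters are invented for illustration and stand in for values that would be fit to the real input data beforehand; nothing here is drawn from Li or the application.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-state hidden Markov model with Gaussian emissions.
trans = np.array([[0.9, 0.1],
                  [0.2, 0.8]])   # state-to-state transition probabilities
means = np.array([0.0, 3.0])     # per-state emission means
stds = np.array([1.0, 0.5])      # per-state emission standard deviations

def sample_synthetic(n_steps, state=0):
    # Walk the hidden chain and emit one observation per step.
    out = []
    for _ in range(n_steps):
        out.append(rng.normal(means[state], stds[state]))
        state = rng.choice(2, p=trans[state])
    return np.array(out)

synthetic_series = sample_synthetic(100)  # synthetic data shaped like the real series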
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claims recite a non-transitory computer-readable storage medium storing a program, a system, and a method, each of which is one of the four categories of eligible subject matter.

Claims 1, 14, and 27

Step 2A Prong 1: The claims recite the following limitations: A system, method, and non-transitory computer-readable medium comprising computer-readable instructions stored thereon that when executed by a processor cause the processor to:
(A) generate values of a first set of hyperparameters for a first trained machine learning model and values of a second set of hyperparameters for a second trained machine learning model (Mental Process); …
(C) generate a plurality of cluster centroids from the set of real data and the values of the first set of hyperparameters using the first trained machine learning model (Mental Process);
(D) generate a plurality of synthetic data vectors based on the plurality of cluster centroids, the values of the second set of hyperparameters, and the set of real data using the second trained machine learning model (Mental Process);
(E) compute an error function based on the plurality of synthetic data vectors for the first set of hyperparameters and the second set of hyperparameters based on predictions made by a third machine learning model, wherein the error function comprises a combination of a similarity error function indicative of a difference in marginal probability distribution between the plurality of synthetic data vectors and the set of real data, a prediction error function indicative of a difference in conditional probability distribution between the plurality of synthetic data vectors and the set of real data, and a bias assessment error function indicative of bias in the first trained machine learning model or the second trained machine learning model (Mathematical Concept);
(F) compute an objective function value based on at least one of the similarity error function, the prediction error function, or the bias assessment error function (Mathematical Concept);
(G) determine that the objective function is not an optimal value (Mental Process);
(H) responsive to determining that the objective function value is not an optimal value, update the values of the first set of hyperparameters and the values of the second set of hyperparameters (Mental Process).

Under the broadest reasonable interpretation of the claims, the mental processes may be performed practically in the human mind, with the aid of a pencil, paper, and data, but for the processors, as there is nothing in the claims or the specification that indicates otherwise. Computing an error function with a similarity error, prediction error, and calculated bias, and computing an objective function value, are mathematical concepts under the broadest reasonable interpretation. Accordingly, the claims recite an abstract idea.
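To make the recited (A)-(H) flow easier to follow, here is a hypothetical Python sketch of the loop the claim language describes. Everything below is invented for illustration: trivial stand-ins replace the claimed trained models and error functions, and none of it reflects the applicant's actual implementation.

import numpy as np

rng = np.random.default_rng(0)

def similarity_error(synthetic, real):
    # Stand-in for a marginal-distribution gap: per-feature mean difference.
    return float(np.abs(synthetic.mean(axis=0) - real.mean(axis=0)).mean())

def prediction_error(synthetic, real):
    # Stand-in for a conditional-distribution gap: feature-correlation difference.
    return float(np.abs(np.corrcoef(synthetic.T) - np.corrcoef(real.T)).mean())

def bias_assessment_error(synthetic):
    # Placeholder bias measure: outcome gap between two hypothetical groups
    # split on the sign of column 0.
    g = synthetic[:, 0] > 0
    return float(abs(synthetic[g, 1].mean() - synthetic[~g, 1].mean()))

def generate_synthetic(real, n_iters=25, tol=0.05):
    best = None
    for _ in range(n_iters):
        k = int(rng.integers(2, 6))          # (A)/(H): hyperparameters for "model 1"
        noise_scale = rng.uniform(0.1, 1.0)  # ... and for "model 2"
        centroids = real[rng.choice(len(real), size=k, replace=False)]     # (B)-(C)
        synthetic = centroids[rng.integers(k, size=len(real))] \
            + rng.normal(scale=noise_scale, size=real.shape)               # (D)
        objective = (similarity_error(synthetic, real)   # (E)-(F): combined errors
                     + prediction_error(synthetic, real)
                     + bias_assessment_error(synthetic))
        if best is None or objective < best[0]:
            best = (objective, synthetic)
        if objective < tol:                              # (G): "optimal" test
            break
    return best[1]                                       # (I): output synthetic set

synthetic = generate_synthetic(rng.normal(size=(200, 4)))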
Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claim recites the following additional elements: (B) input the values of the first set of hyperparameters and a set of real data into the first trained machine learning model and the values of the second set of hyperparameters and the set of real data into the second trained machine learning model; … or responsive to determining that the objective function value is an optimal value, output the plurality of synthetic data vectors as a set of synthetic data. Inputting values into a learning model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Outputting data is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Inputting values into a learning model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Outputting data is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are not patent eligible.

Claims 31 and 32

Step 2A Prong 1: The claims recite the following limitations: determining a multi-stage objective performance metric representing convergence of the first generative ANN and the second generative ANN based on aggregated training losses (Mathematical Concept and Mental Process); controlling, by a training control circuit, a continuation or a termination of the coordinated multi-stage training based on the multi-stage objective performance metric (Mathematical Concept and Mental Process); and when the multi-stage objective performance metric fails to satisfy one or more training thresholds, using the multi-stage objective performance metric to control a hyperparameter search operation of a hyperparameter generation circuit for tuning the first set of hyperparameters and the second set of hyperparameters (Mathematical Concept and Mental Process).

Under the broadest reasonable interpretation of the claims, determining a performance metric and controlling the continuation or termination of training based on the performance metric may be performed practically in the human mind, with the aid of a pencil, paper, and data, but for the processors, as there is nothing in the claims or the specification that indicates otherwise. Computing performance data is a mathematical concept under the broadest reasonable interpretation. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application.
The claim recites the following additional elements: A computer implemented method and system comprising: a database storing non-synthetic data samples; a memory storing computer-executable instructions; and one or more processing circuits configured to execute the computer-executable instructions to perform operations comprising: initializing one or more training configurations for training a first generative artificial neural network (ANN) using an input of a first set of hyperparameters; training, by a processing circuit executing the one or more training configurations for the first generative ANN, the first generative ANN to generate cluster centroids using an input of a non-synthetic training set comprising non-synthetic data samples; in response to training the first generative ANN, generating, by the processing circuit executing the first generative ANN, a plurality of cluster centroids associated with the non-synthetic data samples; initializing one or more training configurations for training a second generative ANN using an input of a second set of hyperparameters; training, by the processing circuit executing the one or more training configurations for the second generative ANN, the second generative ANN to generate synthetic data samples using (a) an input of the non-synthetic training set and (b) an input of the plurality of cluster centroids generated by the first generative ANN; in response to training the second generative ANN, generating, by the processing circuit executing the second generative ANN, a plurality of synthetic data samples.

Training neural networks using data is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Collecting data samples is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). Processors and memory are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Training neural networks using data is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Collecting data samples is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). Processors and memory are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are not patent eligible.
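Purely as an illustration of the control flow recited in claims 31 and 32 (aggregate the two networks' training losses into a convergence metric, then continue, terminate, or trigger a hyperparameter search), consider the hypothetical sketch below. The metric and thresholds are invented and are not taken from the application.

import numpy as np

def multi_stage_metric(losses_ann1, losses_ann2, window=5):
    # Hypothetical convergence measure: change in the two ANNs' aggregated
    # training losses between the last two averaging windows.
    agg = np.asarray(losses_ann1) + np.asarray(losses_ann2)
    if len(agg) < 2 * window:
        return float("inf")
    return float(abs(agg[-window:].mean() - agg[-2 * window:-window].mean()))

def training_control(metric, threshold=1e-3):
    # Continue, terminate, or hand the metric to a hyperparameter search.
    if metric <= threshold:
        return "terminate"   # converged: stop the coordinated training
    if metric > 100 * threshold:
        return "retune"      # threshold not satisfied: drive hyperparameter search
    return "continue"

l1 = [3.0, 2.5, 2.0, 1.8, 1.7, 1.6, 1.55, 1.52, 1.51, 1.50]
l2 = [2.0, 1.8, 1.6, 1.5, 1.45, 1.42, 1.41, 1.40, 1.40, 1.40]
action = training_control(multi_stage_metric(l1, l2))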
Dependent Claims

Claims 2 and 15

Step 2A Prong 1: The judicial exceptions of claims 1 and 14 are incorporated. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claims recite the following additional elements: wherein each of the first trained machine learning model and the second trained machine learning model is a generative machine learning model, and wherein an output from the first trained machine learning model is input into the second trained machine learning model. The machine learning models being generative models is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Using output data from one model as input for another model is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The machine learning models being generative models is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Using output data from one model as input for another model is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). The claims are not patent eligible.

Claims 3, 16, and 28

Step 2A Prong 1: The judicial exceptions of claims 2, 15, and 27 are incorporated. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claims recite the following additional elements: wherein the first trained machine learning model is a Gaussian Mixture Model, and the second trained machine learning model is a Generative Adversarial Network model. The machine learning models being a Gaussian Mixture Model and a Generative Adversarial Network model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The machine learning models being a Gaussian Mixture Model and a Generative Adversarial Network model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are not patent eligible.

Claims 4 and 17

Step 2A Prong 1: The judicial exceptions of claims 1 and 14 are incorporated. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claims recite the following additional elements: wherein the third machine learning model is a random forest model. A machine learning model being a random forest model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. A machine learning model being a random forest model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are not patent eligible.

Claims 5 and 18

Step 2A Prong 1: The judicial exceptions of claims 1 and 14 are incorporated. The claims recite the following limitations: execute the fifth machine learning model to generate the similarity error function based on the plurality of synthetic data vectors and the set of real data (Mathematical Concept); … generate the prediction error function and the bias assessment error function based on the plurality of synthetic data vectors (Mathematical Concept). Generating similarity error and prediction error functions are mathematical concepts under the broadest reasonable interpretation of the claims. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application.
The claims recite the following additional elements: wherein the third machine learning model comprises a fifth machine learning model and a sixth machine learning model, and wherein to compute the error function, the computer-readable instructions further cause the processor to: execute the fifth machine learning model to … and execute the sixth machine learning model to … The use of the fifth and sixth machine learning models is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The use of the fifth and sixth machine learning models is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are not patent eligible.

Claims 6, 19, and 29

Step 2A Prong 1: The judicial exceptions of claims 5, 18, and 27 are incorporated. The claims recite the following limitations: shuffle the plurality of rows to obtain a plurality of shuffled rows of the plurality of combined data vectors (Mental Process); add a first binary label to each of the plurality of shuffled rows, the first binary label indicating whether each of the plurality of shuffled rows comprises an actual real data vector or an actual synthetic data vector (Mental Process); … classify each of the plurality of combined data vectors … into either predicted real data or predicted synthetic data, the classification indicated by a second binary label added to each of the plurality of shuffled rows of the plurality of combined data vectors (Mental Process); compute a loss function based on the first binary label and the second binary label; and multiply the loss function with a negative 1 to obtain the similarity error function (Mathematical Concept).

Under the broadest reasonable interpretation, shuffling data, labeling data, and classifying data are all mental processes that may be practically performed in a human's mind with the aid of the data. Computing a loss function and multiplying the loss function is a mathematical concept under the broadest reasonable interpretation. Claim 29 also recites "wherein the loss function comprises a cross-entropy loss function or a Bernoulli loss function"; including a cross-entropy loss function or a Bernoulli loss function is a mathematical concept under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.
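For context, the shuffle-label-classify-negate sequence recited in these claims is a discriminator-style two-sample test. The sketch below is hypothetical: the classifier argument is a stand-in for the claimed fifth machine learning model, and the names are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

def similarity_error(real, synthetic, classifier):
    # Combine rows, shuffle them, attach a first (true) binary label, obtain a
    # second (predicted) label from a classifier, then negate the cross-entropy
    # loss: a confused classifier (high loss) means the sets look similar.
    combined = np.vstack([real, synthetic])
    first_label = np.r_[np.ones(len(real)), np.zeros(len(synthetic))]
    order = rng.permutation(len(combined))           # shuffle the rows
    combined, first_label = combined[order], first_label[order]
    p_real = classifier(combined)                    # predicted probability of "real"
    eps = 1e-12
    loss = -np.mean(first_label * np.log(p_real + eps)
                    + (1 - first_label) * np.log(1 - p_real + eps))
    return -1.0 * loss                               # multiply the loss by negative 1

# Usage with a trivially uninformative "fifth model":
err = similarity_error(rng.normal(size=(50, 3)),
                       rng.normal(size=(50, 3)),
                       lambda x: np.full(len(x), 0.5))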
Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claims recite the following additional elements: combine the plurality of synthetic data vectors with the set of real data to obtain a plurality of combined data vectors, wherein the plurality of combined data vectors are arranged in a plurality of rows and a plurality of columns, and wherein each row of the plurality of rows corresponds to one of the plurality of combined data vectors; … input the plurality of combined data vectors into the fifth machine learning model; … using the fifth machine learning model … Combining and arranging datasets is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). Inputting data into a fifth machine learning model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Combining and arranging datasets is mere data gathering, which is an insignificant extra-solution activity as discussed in MPEP 2106.05(g). Inputting data into a fifth machine learning model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are not patent eligible.

Claims 7 and 20

Step 2A Prong 1: The judicial exceptions of claims 6 and 19 are incorporated. The claims recite the following limitations: wherein the loss function comprises a cross-entropy loss function or a Bernoulli loss function (Mathematical Concept). Including a cross-entropy loss function or a Bernoulli loss function is a mathematical concept under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application, and the claims do not recite additional elements. The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.

Claims 8, 21, and 30

Step 2A Prong 1: The judicial exceptions of claims 5, 18, and 27 are incorporated. The claims recite the following limitations: predict a second target value … based on the set of real data (Mental Process); and compute the prediction error function as a loss function based on the first target value and the second target value (Mathematical Concept). Predicting a value based on existing data is a mental process under the broadest reasonable interpretation of the claim language because it can be practically performed in a human's mind with the aid of a pencil, paper, and data, but for the processor. Computing the prediction error function as a loss function is a mathematical concept under the broadest reasonable interpretation of the claim language. Claim 30 also recites "wherein the loss function comprises a cross-entropy loss function or a least squares estimation function" (Mathematical Concept).
Including a cross-entropy loss function or a least squares estimation function in the loss function is a mathematical concept under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claims recite the following additional elements: train the sixth machine learning model using the plurality of synthetic data vectors to predict a first target value; input the set of real data into the sixth machine learning model that has been trained with the plurality of synthetic data vectors. Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). Training and inputting data into a sixth machine learning model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). Training and inputting data into a sixth machine learning model is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are not patent eligible.

Claims 9 and 22

Step 2A Prong 1: The judicial exceptions of claims 8 and 21 are incorporated. The claims recite the following limitations: wherein the loss function comprises a cross-entropy loss function or a least squares estimation function (Mathematical Concept). Including a cross-entropy loss function or a least squares estimation function in the loss function is a mathematical concept under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application, and the claims do not recite additional elements. The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.

Claims 10 and 23

Step 2A Prong 1: The judicial exceptions of claims 5 and 18 are incorporated. The claims recite the following limitations: predict a first target value; identify at least one sensitive variable from the plurality of synthetic data vectors (Mental Process); compute one or more of a demographic parity value, a predictive parity value, an equal accuracy value, an equalized odds value, or an equal opportunity value for each of the at least one sensitive variable (Mathematical Concept); and combine the computed one or more of the demographic parity value, the predictive parity value, the equalized odds value, or the equal opportunity value as the bias assessment error function (Mathematical Concept).

Predicting a target value and identifying a variable are mental processes under the broadest reasonable interpretation of the claim language. Computing one or more of a demographic parity value, a predictive parity value, an equal accuracy value, an equalized odds value, or an equal opportunity value for each of the at least one sensitive variable, and combining one or more of those values as the bias assessment error function, are mathematical concepts under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. The claims recite the following additional elements: train the sixth machine learning model using the plurality of synthetic data vectors to … Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). Training a machine learning model using synthetic data vectors is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). Training a machine learning model using synthetic data vectors is generally linking the abstract ideas to the technological environment of machine learning, as discussed in MPEP 2106.05(h). The claims are not patent eligible.
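The parity values recited in claims 10 and 23 have standard textbook definitions; the sketch below gives generic formulations for binary predictions and a binary sensitive variable. These are illustrative formulations of my own, not the applicant's claimed definitions, and summing the gaps is just one hypothetical way to "combine" them.

import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    # |P(prediction = 1 | group 0) - P(prediction = 1 | group 1)|
    return float(abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean()))

def equal_opportunity_gap(y_pred, y_true, sensitive):
    # Gap in true-positive rates between the two groups.
    tpr0 = y_pred[(sensitive == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(sensitive == 1) & (y_true == 1)].mean()
    return float(abs(tpr0 - tpr1))

def bias_assessment_error(y_pred, y_true, sensitive):
    # One hypothetical way to combine the parity values: sum the gaps.
    return (demographic_parity_gap(y_pred, sensitive)
            + equal_opportunity_gap(y_pred, y_true, sensitive))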
Claims 11 and 24

Step 2A Prong 1: The judicial exceptions of claims 1 and 14 are incorporated. The claims recite the following limitations: wherein the optimal value of the objective function value corresponds to a value in which the similarity error function is less than a first threshold, the prediction error function is less than a second threshold, and the bias assessment error function is less than a third threshold (Mathematical Concept). Applying threshold values to a similarity error function, prediction error function, and bias assessment error function are mathematical concepts under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application, and the claims do not recite additional elements. The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.

Claims 12 and 25

Step 2A Prong 1: The judicial exceptions of claims 1 and 14 are incorporated. The claims recite the following limitations: wherein the objective function value is a weighted average of the similarity error function, the prediction error function, and the bias assessment error function (Mathematical Concept). The objective function being a weighted average of the similarity error function, the prediction error function, and the bias assessment error function is a mathematical concept under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application, and the claims do not recite additional elements.
The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claims are not patent eligible.

Claims 13 and 26

Step 2A Prong 1: The judicial exceptions of claims 1 and 14 are incorporated. The claims recite the following limitations: compare the weighted average of the similarity error function, the prediction error function, and the bias assessment error function with a predetermined threshold (Mental Process); and determine that the weighted average of the similarity error function, the prediction error function, and the bias assessment error function corresponds to the optimal value when the weighted average of the similarity error function, the prediction error function, and the bias assessment error function is greater than the predetermined threshold (Mental Process). Comparing a weighted average to a predetermined threshold and determining that the weighted average is greater than the predetermined threshold are mental processes under the broadest reasonable interpretation of the claim language. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2: The judicial exceptions are not integrated into a practical application. Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Processors are recited, which are generic computing components recited at a high level as a means to apply the judicial exception, as discussed in MPEP 2106.05(f). The claims are not patent eligible.
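Read together, claims 11-13 (and 24-26) describe per-term thresholds, a weighted-average objective, and a greater-than comparison. A hypothetical sketch follows; the weights and thresholds are invented, and the greater-than reading in the final check is one plausible interpretation (sensible where the similarity term was negated, as in claims 6, 19, and 29), not a statement of what the claims require.

def objective_function_value(e_sim, e_pred, e_bias, weights=(0.4, 0.4, 0.2)):
    # Claims 12/25: weighted average of the three error terms.
    w1, w2, w3 = weights
    return (w1 * e_sim + w2 * e_pred + w3 * e_bias) / (w1 + w2 + w3)

def is_optimal(e_sim, e_pred, e_bias,
               t1=0.1, t2=0.1, t3=0.05, avg_threshold=-0.5):
    # Claims 11/24: each error term must fall below its own threshold.
    below = e_sim < t1 and e_pred < t2 and e_bias < t3
    # Claims 13/26: the weighted average must exceed a predetermined threshold
    # (hypothetical reading; plausible when the similarity term is negated).
    return below and objective_function_value(e_sim, e_pred, e_bias) > avg_threshold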
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-18, 21-28, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (Pub. No. US 12119848 B1), hereafter Li, in view of Bera et al. (Pub. No. US 20240071367 A1), hereafter Bera, and Cella et al. (Pub. No. US 20240144141 A1), hereafter Cella.

Regarding claims 1, 14, and 27, Li teaches a system, method, and a non-transitory computer-readable medium comprising computer-readable instructions stored thereon that when executed by a processor cause the processor to:
(A) generate values of a first set of hyperparameters for a first trained machine learning model and values of a second set of hyperparameters for a second trained machine learning model; (B) input the values of the first set of hyperparameters and a set of real data into the first trained machine learning model and the values of the second set of hyperparameters and the set of real data into the second trained machine learning model (during model training, there are various parameters and hyperparameters; algorithmic tuning of parameters and hyperparameters is done in between iterations, C7:L15-20);
(C) generate a plurality of cluster centroids from the set of real data and the values of the first set of hyperparameters using the first trained machine learning model (cluster centroids are included as part of the list of adjustable hyperparameters of a model during training, C7:L19-33; the one or more machine learning models may be clustering algorithms, which output clusters, C7:L41-52);
(D) generate a plurality of synthetic data vectors based on the plurality of cluster centroids, the values of the second set of hyperparameters, and the set of real data using the second trained machine learning model ("For example, data preprocessing can include, but is not limited to, tasks related to data cleansing, data deduplication, data normalization, data transformation, handling missing values, feature extraction and selection, mismatch handling, and/or the like", C7:L1-5);
(E) compute an error function based on the plurality of synthetic data vectors for the first set of hyperparameters and the second set of hyperparameters based on predictions made by a third machine learning model (a loss function is produced based on sets of hyperparameters and predictions made by the model, C7:L15-40).

Li does not appear to explicitly teach "a bias assessment error function indicative of bias in the first trained machine learning model or the second trained machine learning model". Bera teaches a bias assessment error function indicative of bias in the first trained machine learning model or the second trained machine learning model (demographic bias detection in models may be calculated by Character Error Rate (CER) and/or Word Error Rate (WER), P0046).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Li and Bera before them, to include Bera's specific teaching of calculated demographic bias detection in models in Li's system of Controllable Loss Compression Using Joint Learning. One would have been motivated to make such a combination of calculated demographic bias detection in models (see Bera P0046), and having bias as an adjustable hyperparameter to perform algorithmic tuning of the model according to bias (see Li C7:L20).
Li in view of Bera does not appear to explicitly teach wherein the error function comprises a combination of a similarity error function indicative of a difference in marginal probability distribution between the plurality of synthetic data vectors and the set of real data, a prediction error function indicative of a difference in conditional probability distribution between the plurality of synthetic data vectors and the set of real data, … (F) compute an objective function value based on at least one of the similarity error function, the prediction error function, or the bias assessment error function; (G) determine that the objective function is not an optimal value; (H) responsive to determining that the objective function value is not an optimal value, update the values of the first set of hyperparameters and the values of the second set of hyperparameters and repeat (B)-(G), or responsive to determining that the objective function value is an optimal value, execute (H); and (I) output the plurality of synthetic data vectors as a set of synthetic data.

Cella teaches wherein the error function comprises a combination of a similarity error function indicative of a difference in marginal probability distribution between the plurality of synthetic data vectors and the set of real data ("Similarity learning may include learning, by the machine learning model 3000, from examples using a similarity function, the similarity function being designed to measure how similar or related two objects are", P0450), a prediction error function indicative of a difference in conditional probability distribution between the plurality of synthetic data vectors and the set of real data (prediction error between current and future data values may be calculated, P2726), … (F) compute an objective function value based on at least one of the similarity error function, the prediction error function, or the bias assessment error function (the machine learning model 3000 may learn one or more functions via iterative optimization of an objective function, the functions including a similarity function and prediction, P0450); (G) determine that the objective function is not an optimal value; (H) responsive to determining that the objective function value is not an optimal value, update the values of the first set of hyperparameters and the values of the second set of hyperparameters and repeat (B)-(G), or responsive to determining that the objective function value is an optimal value (the objective function is optimized with iterative optimization, P0450), output the plurality of synthetic data vectors as a set of synthetic data (once optimized, the objective function may provide the machine learning model 3000 with the ability to accurately determine an output for inputs other than inputs included in the training data, P0450).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Li, Bera, and Cella before them, to include Cella's specific teachings of calculated prediction and similarity error in models in Li's system of Controllable Loss Compression Using Joint Learning. One would have been motivated to make such a combination of calculated prediction and similarity error (see Cella P0450 and P2726), and having loss functions as an adjustable hyperparameter to perform algorithmic tuning of the model according to loss (see Li C7:L26).
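For readers following the claim language, a "difference in marginal probability distribution" lends itself to simple histogram-style comparisons. The sketch below is one generic formulation of my own (it is not Cella's similarity function or the applicant's definition); a conditional-distribution (prediction-style) gap would instead compare model predictions across the two data sets, as claims 8, 21, and 30 spell out further below.

import numpy as np

def marginal_gap(real, synthetic, bins=20):
    # Average per-feature distance between normalized histograms: a generic
    # proxy for a difference in marginal probability distribution.
    gaps = []
    for j in range(real.shape[1]):
        lo = min(real[:, j].min(), synthetic[:, j].min())
        hi = max(real[:, j].max(), synthetic[:, j].max())
        p, _ = np.histogram(real[:, j], bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(synthetic[:, j], bins=bins, range=(lo, hi), density=True)
        gaps.append(np.abs(p - q).mean())
    return float(np.mean(gaps))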
Regarding claims 2 and 15, Li in view of Bera and Cella teaches the limitations of claims 1 and 14 as outlined above. Cella further teaches wherein each of the first trained machine learning model and the second trained machine learning model is a generative machine learning model, and wherein an output from the first trained machine learning model is input into the second trained machine learning model ("where a given type of neural network takes inputs from a data source or other neural network and provides outputs that are included within the input sets of another neural network until a flow is completed and a final output is provided", P1154).

Regarding claims 3, 16, and 28, Li in view of Bera and Cella teaches the limitations of claims 2, 15, and 27 as outlined above. Cella further teaches wherein the first trained machine learning model is a Gaussian Mixture Model, and the second trained machine learning model is a Generative Adversarial Network model (machine learning models may include Gaussian models, P1150, or a GAN, P1161). Regarding claim 28, Li further teaches wherein the third machine learning model is a random forest model (machine learning models may be one of a random forest model, C7:L49-50).

Regarding claims 4 and 17, Li in view of Bera and Cella teaches the limitations of claims 1 and 14 as outlined above. Li further teaches wherein the third machine learning model is a random forest model (machine learning models may be one of a random forest model, C7:L49-50).

Regarding claims 5 and 18, Li in view of Bera and Cella teaches the limitations of claims 1 and 14 as outlined above. Cella further teaches wherein the third machine learning model comprises a fifth machine learning model and a sixth machine learning model, and wherein to compute the error function, the computer-readable instructions further cause the processor to: execute the fifth machine learning model to generate the similarity error function based on the plurality of synthetic data vectors and the set of real data ("Similarity learning may include learning, by the machine learning model 3000, from examples using a similarity function, the similarity function being designed to measure how similar or related two objects are", P0450); and execute the sixth machine learning model to generate the prediction error function (prediction error between current and future data values may be calculated, P2726). Bera further teaches the bias assessment error function based on the plurality of synthetic data vectors (demographic bias detection in models may be calculated by Character Error Rate (CER) and/or Word Error Rate (WER), P0046; data used in the model may be synthetic, P0067, P0069).

Regarding claims 8, 21, and 30, Li in view of Bera and Cella teaches the limitations of claims 5, 18, and 27 as outlined above. Cella further teaches train the sixth machine learning model using the plurality of synthetic data vectors to predict a first target value (machine learning model predicts current data, P2726); input the set of real data into the sixth machine learning model that has been trained with the plurality of synthetic data vectors; predict a second target value by the sixth machine learning model based on the set of real data (machine learning model predicts future data, P2726); and compute the prediction error function as a loss function based on the first target value and the second target value (prediction error is calculated as the difference between current and future data, P2726).
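The sequence in claims 8, 21, and 30 is commonly called a train-on-synthetic, test-on-real check. Below is a hypothetical least-squares sketch: a linear model stands in for the claimed sixth machine learning model, and scoring the real-data predictions against the real targets is one plausible reading of a loss "based on the first target value and the second target value", not the applicant's stated method.

import numpy as np

def prediction_error_tstr(synthetic, real):
    # Fit a linear "sixth model" on synthetic rows (features -> last column),
    # then apply it to the real rows and score the result by least squares.
    Xs, ys = synthetic[:, :-1], synthetic[:, -1]
    Xr, yr = real[:, :-1], real[:, -1]
    A = np.c_[Xs, np.ones(len(Xs))]
    w, *_ = np.linalg.lstsq(A, ys, rcond=None)       # train on synthetic data
    second_target = np.c_[Xr, np.ones(len(Xr))] @ w  # predict on real data
    return float(np.mean((yr - second_target) ** 2)) # least-squares loss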
Regarding claim 30, Cella further teaches wherein the loss function comprises a cross-entropy loss function or a least squares estimation function (a least squares function may be used, P1099).

Regarding claims 9 and 22, Li in view of Bera and Cella teaches the limitations of claims 8 and 21 as outlined above. Cella further teaches wherein the loss function comprises a cross-entropy loss function or a least squares estimation function (a least squares function may be used, P1099).

Regarding claims 10 and 23, Li in view of Bera and Cella teaches the limitations of claims 5 and 18 as outlined above. Bera further teaches wherein to generate the bias assessment error function, the computer-readable instructions further cause the processor to: train the sixth machine learning model using the plurality of synthetic data vectors to predict a first target value (a predictive model may be used to predict or classify a wider range of new information or unseen information ingested into the system; data used in the model may be synthetic, P0057, P0067-P0069); identify at least one sensitive variable from the plurality of synthetic data vectors (demographic characteristics identified by the model include age, gender, ethnicity, race, and accent of the speaker, P0012); compute one or more of a demographic parity value, a predictive parity value, an equal accuracy value, an equalized odds value, or an equal opportunity value for each of the at least one sensitive variable (Character Error Rate (CER) and/or Word Error Rate (WER) are calculated by the model, P0046); and combine the computed one or more of the demographic parity value, the predictive parity value, the equalized odds value, or the equal opportunity value as the bias assessment error function (demographic bias detection in models may be calculated by Character Error Rate (CER) and/or Word Error Rate (WER), P0046).

Regarding claims 11 and 24, Li in view of Bera and Cella teaches the limitations of claims 1 and 14 as outlined above. Cella further teaches wherein the optimal value of the objective function value corresponds to a value in which the similarity error function is less than a first threshold, the prediction error function is less than a second threshold ("For every input in a training dataset, the output of the artificial neural network may be observed and compared with the expected output, and the error between the expected output and the observed output may be propagated back to the previous layer. The weights may be adjusted accordingly based on the error. This process is repeated until the output error is below a predetermined threshold", P1172). Bera further teaches the bias assessment error function is less than a third threshold (a threshold may be used to provide a qualitative determination as to whether the model is considered biased or not; specifically, if the detected PER distance or difference is more significant than the threshold level, the model may be considered biased, and otherwise the model may be considered neutral, P0052).

Regarding claims 12 and 25, Li in view of Bera and Cella teaches the limitations of claims 1 and 14 as outlined above. Cella further teaches wherein the objective function value is a weighted average of the similarity error function, the prediction error function, and the bias assessment error function (the objective function may be optimized based on learning of more than one function, P0450).
Regarding claims 13 and 26, Li in view of Bera and Cella teaches the limitations of claims 12 and 25 as outlined above. Cella further teaches compare the weighted average of the similarity error function, the prediction error function, and the bias assessment error function with a predetermined threshold (in model training, weights may be assigned to specific tasks (functions) and compared to a threshold, P0444); and determine that the weighted average of the similarity error function, the prediction error function, and the bias assessment error function corresponds to the optimal value when the weighted average of the similarity error function, the prediction error function, and the bias assessment error function is greater than the predetermined threshold (if the data does not meet the threshold, the data is not sent, meaning only optimal data remains, P0444).

Claims 6, 7, 19, 20, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Bera and Cella, and further in view of Voinea et al. (Pub. No. US 11797705 B1), hereafter Voinea.

Regarding claims 6, 19, and 29, Li in view of Bera and Cella teaches the limitations of claims 5, 18, and 27 as outlined above. Li does not appear to explicitly teach combine the plurality of synthetic data vectors with the set of real data to obtain a plurality of combined data vectors, wherein the plurality of combined data vectors are arranged in a plurality of rows and a plurality of columns, and wherein each row of the plurality of rows corresponds to one of the plurality of combined data vectors; shuffle the plurality of rows to obtain a plurality of shuffled rows of the plurality of combined data vectors; add a first binary label to each of the plurality of shuffled rows, the first binary label indicating whether each of the plurality of shuffled rows comprises an actual real data vector or an actual synthetic data vector; input the plurality of combined data vectors into the fifth machine learning model; classify each of the plurality of combined data vectors using the fifth machine learning model into either predicted real data or predicted synthetic data, the classification indicated by a second binary label added to each of the plurality of shuffled rows of the plurality of combined data vectors; compute a loss function based on the first binary label and the second binary label; and multiply the loss function with a negative 1 to obtain the similarity error function.

Voinea teaches combine the plurality of synthetic data vectors with the set of real data to obtain a plurality of combined data vectors (real world sensitive data is collected randomly; generator 505 generates synthetic sensitive data to simulate the training set, which is known to contain sensitive data, C10:L3-18), wherein the plurality of combined data vectors are arranged in a plurality of rows and a plurality of columns, and wherein each row of the plurality of rows corresponds to one of the plurality of combined data vectors (data may be represented by tables including rows and columns, C5:L3-5; data input to the model may be in vector form, C11:L52-59, figure 7);
C11:L52-59, figure 7); shuffle the plurality of rows to obtain a plurality of shuffled rows of the plurality of combined data vectors (data is collected/generated randomly, thus shuffled, C10:L1-3); add a first binary label to each of the plurality of shuffled rows, the first binary label indicating whether each of the plurality of shuffled rows comprises an actual real data vector or an actual synthetic data vector (binary label is assigned indicating if data is real or synthetic, C10:L18-27); input the plurality of combined data vectors into the fifth machine learning model (discriminator 510 takes both synthetic and real data as inputs, C10:L29-36); classify each of the plurality of combined data vectors using the fifth machine learning model into either predicted real data or predicted synthetic data, the classification indicated by a second binary label added to each of the plurality of shuffled rows of the plurality of combined data vectors (results of the classification is indicated by a binary label, C10:L18-27. More than one label may be assigned to data, C16:L30-32, C6:L41-43); compute a loss function based on the first binary label and the second binary label (discriminator 510 may include binary labels. Discriminator 510 has associated loss functions, C10:L18-27); and multiply the loss function with a negative 1 to obtain the similarity error function (loss function is shown to be multiplied by a negative 1, C15:L24-43). Regarding claim 29, Voinea teaches wherein the loss function comprises a cross-entropy loss function or a Bernoulli loss function (Cross entropy loss function may be used, C15:L22-42). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Li, Bera, Cella and Voinea before them, to include Voinea’s specific teaching of collecting and combining real world data with synthetic data, in the form of rows and columns, in Li’s system of Controllable Loss Compression Using Joint Learning. One would have been motivated to make such a combination of collecting and combining real world data with synthetic data, in the form of rows and columns (see Voinea C10:L3-18, C11:L52-59, figure 7), and support vector machines to analyze a model’s performance in classification (see Li C:L50). Regarding claims 7 and 20, Li in view of Bera and Cella and further in view of Voinea teaches the limitations of claims 6 and 19 as outlined above. Voinea further teaches wherein the loss function comprises a cross-entropy loss function or a Bernoulli loss function (Cross entropy loss function may be used, C15:L22-42). Claims 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Cella. Regarding claims 31 and 32, Li teaches a computer-implemented method and system comprising a database storing non-synthetic data samples; a memory storing computer-executable instructions; and one or more processing circuits configured to execute the computer-executable instructions to perform operations comprising: initializing one or more training configurations for training a first generative artificial neural network (ANN) using an input of a first set of hyperparameters (during training of a neural network, there are various parameters and hyperparameters. Algorithmic tuning of parameters and hyperparameters is done in between iterations. 
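Claims 6, 19, and 29 recite a concrete recipe for the similarity error function: pool real and synthetic rows, shuffle, attach a ground-truth binary label, have the "fifth machine learning model" classify each row, compute a cross-entropy loss from the two labels, and negate it. A minimal sketch follows, assuming a scikit-learn logistic regression stands in for the fifth machine learning model; all variable names are illustrative, not the applicant's implementation.

```python
# Minimal sketch of the claimed similarity error function: mix real and
# synthetic rows, shuffle, label, classify with a discriminator, and negate
# the cross-entropy loss. The discriminator choice is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))       # set of real data vectors
synthetic = rng.normal(0.1, 1.1, size=(500, 8))  # synthetic data vectors

# Combine into rows/columns and attach the first binary label
# (1 = actual real, 0 = actual synthetic).
combined = np.vstack([real, synthetic])
first_label = np.concatenate([np.ones(500), np.zeros(500)])

# Shuffle the rows (and labels in lockstep).
order = rng.permutation(len(combined))
combined, first_label = combined[order], first_label[order]

# Classify each row as predicted real vs. predicted synthetic.
clf = LogisticRegression().fit(combined, first_label)
second_label_prob = clf.predict_proba(combined)[:, 1]
second_label = (second_label_prob >= 0.5).astype(int)  # the second binary label

# Cross-entropy loss between the labels, multiplied by -1. The predicted
# probabilities are used in the loss here, since hard 0/1 probabilities
# would make cross-entropy degenerate; this is a modeling assumption.
similarity_error = -1.0 * log_loss(first_label, second_label_prob)
print(similarity_error)
```

Negating the loss means a discriminator that cannot tell real from synthetic apart (loss near ln 2 for balanced classes) yields a strongly negative similarity error, while easily separated, i.e. dissimilar, data yields a value near zero, which is consistent with claims 11 and 24 treating a similarity error below a threshold as optimal.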
Claims 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Cella.

Regarding claims 31 and 32, Li teaches a computer-implemented method and system comprising: a database storing non-synthetic data samples; a memory storing computer-executable instructions; and one or more processing circuits configured to execute the computer-executable instructions to perform operations comprising: initializing one or more training configurations for training a first generative artificial neural network (ANN) using an input of a first set of hyperparameters (during training of a neural network, there are various parameters and hyperparameters; algorithmic tuning of parameters and hyperparameters is done in between iterations, C7:L15-20); training, by a processing circuit executing the one or more training configurations for the first generative ANN, the first generative ANN to generate cluster centroids using an input of a non-synthetic training set comprising non-synthetic data samples (cluster centroids are included in the list of adjustable hyperparameters of a model during training of a neural network, C7:L19-33); in response to training the first generative ANN, generating, by the processing circuit executing the first generative ANN, a plurality of cluster centroids associated with the non-synthetic data samples (“For example, data preprocessing can include, but is not limited to, tasks related to data cleansing, data deduplication, data normalization, data transformation, handling missing values, feature extraction and selection, mismatch handling, and/or the like”, C7:L1-5); initializing one or more training configurations for training a second generative ANN using an input of a second set of hyperparameters (during training of a neural network, there are various parameters and hyperparameters; algorithmic tuning of parameters and hyperparameters is done in between iterations, C7:L15-20); training, by the processing circuit executing the one or more training configurations for the second generative ANN, the second generative ANN to generate synthetic data samples using (a) an input of the non-synthetic training set and (b) an input of the plurality of cluster centroids generated by the first generative ANN (training output 404 is generated based on cluster centroids, other various hyperparameters, and input data, C7:L15-40); in response to training the second generative ANN, generating, by the processing circuit executing the second generative ANN, a plurality of synthetic data samples (“For example, data preprocessing can include, but is not limited to, tasks related to data cleansing, data deduplication, data normalization, data transformation, handling missing values, feature extraction and selection, mismatch handling, and/or the like”, C7:L1-5).

Li does not appear to explicitly teach “determining a multi-stage objective performance metric representing convergence of the first generative ANN and the second generative ANN based on aggregated training losses; controlling, by a training control circuit, a continuation or a termination of the coordinated multi-stage training based on the multi-stage objective performance metric; and when the multi-stage objective performance metric fails to satisfy one or more training thresholds, using the multi-stage objective performance metric to control a hyperparameter search operation of a hyperparameter generation circuit for tuning the first set of hyperparameters and the second set of hyperparameters”.

Cella teaches determining a multi-stage objective performance metric representing convergence of the first generative ANN and the second generative ANN based on aggregated training losses (the machine learning model 3000 may learn one or more functions, including a similarity function and a prediction function, via iterative optimization of an objective function, P0450); controlling, by a training control circuit, a continuation or a termination of the coordinated multi-stage training based on the multi-stage objective performance metric (the objective function is optimized with iterative optimization; once optimized, the objective function may provide the machine learning model 3000 with the ability to accurately determine an output for inputs other than inputs included in the training data, P0450); and when the multi-stage objective performance metric fails to satisfy one or more training thresholds, using the multi-stage objective performance metric to control a hyperparameter search operation of a hyperparameter generation circuit for tuning the first set of hyperparameters and the second set of hyperparameters (“For every input in a training dataset, the output of the artificial neural network may be observed and compared with the expected output, and the error between the expected output and the observed output may be propagated back to the previous layer. The weights may be adjusted accordingly based on the error. This process is repeated until the output error is below a predetermined threshold”, P1172).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Li and Cella before them, to include Cella’s specific teachings of calculating prediction and similarity error in models in Li’s system of Controllable Loss Compression Using Joint Learning. One would have been motivated to make such a combination to obtain a calculated objective function and multi-stage learning (see Cella P0450 and P1172) and to have loss functions as an adjustable hyperparameter for algorithmic tuning of the model according to loss (see Li C7:L26).
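The claims 31-32 limitations that Li lacks describe a training-control loop: aggregate the two generative ANNs' training losses into a multi-stage objective performance metric, terminate on convergence, and otherwise feed the metric to a hyperparameter search. A minimal sketch of that control flow follows; train_stage and sample_hyperparameters are hypothetical stand-ins for the actual training and hyperparameter generation circuits, not anything taught by the references.

```python
# Minimal sketch of the training-control loop recited in claims 31-32.
# The loss model and search strategy below are illustrative assumptions.
import random

random.seed(0)

def train_stage(hyperparams):
    """Stand-in for training one generative ANN; returns a training loss."""
    return random.uniform(0.0, 1.0) * hyperparams["lr"]

def sample_hyperparameters(metric=None):
    """Stand-in for the hyperparameter generation circuit. A real search
    would condition on the metric (e.g., Bayesian optimization); this
    stub simply re-samples."""
    return {"lr": random.choice([0.1, 0.01, 0.001])}

THRESHOLD = 0.05  # one of the claimed "training thresholds"
hp1, hp2 = sample_hyperparameters(), sample_hyperparameters()
for rounds in range(1, 101):
    loss1 = train_stage(hp1)  # first generative ANN (cluster centroids)
    loss2 = train_stage(hp2)  # second generative ANN (synthetic samples)
    metric = loss1 + loss2    # multi-stage objective performance metric
    if metric <= THRESHOLD:   # convergence: terminate the coordinated training
        print(f"converged after {rounds} rounds, metric={metric:.4f}")
        break
    # Metric fails the threshold: use it to drive a fresh hyperparameter search
    # for both sets of hyperparameters, then continue training.
    hp1, hp2 = sample_hyperparameters(metric), sample_hyperparameters(metric)
```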
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHAN MOUNDI, whose telephone number is (703) 756-1547. The examiner can normally be reached 8:30 A.M. to 5:00 P.M.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.M./ Examiner, Art Unit 2141
/MATTHEW ELL/ Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

Dec 03, 2024
Application Filed
Mar 17, 2025
Non-Final Rejection — §101, §103
May 22, 2025
Examiner Interview Summary
May 23, 2025
Response Filed
Jul 07, 2025
Final Rejection — §101, §103
Sep 08, 2025
Response after Non-Final Action
Sep 23, 2025
Examiner Interview Summary
Oct 09, 2025
Request for Continued Examination
Oct 15, 2025
Response after Non-Final Action
Jan 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561970
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE RECOGNITION
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.

Prosecution Projections

3-4
Expected OA Rounds
12%
Grant Probability
46%
With Interview (+33.3%)
4y 6m
Median Time to Grant
High
PTA Risk
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
