Prosecution Insights
Last updated: April 19, 2026
Application No. 17/405,318

ACTIVE LEARNING OF DATA MODELS FOR SCALED OPTIMIZATION

Final Rejection — §101, §103
Filed: Aug 18, 2021
Examiner: TRAN, DAVID HOANG
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: Massachusetts Institute of Technology (MIT)
OA Round: 4 (Final)

Grant Probability: 14% (At Risk)
Expected OA Rounds: 5-6
Median Time to Grant: 4y 2m
Grant Probability With Interview: 38%

Examiner Intelligence

Career Allow Rate: 14% (2 granted / 14 resolved; -40.7% vs TC avg)
Interview Lift: +23.2% for resolved cases with interview
Typical Timeline: 4y 2m average prosecution; 35 applications currently pending
Career History: 49 total applications across all art units

Statute-Specific Performance

§101: 30.4% (-9.6% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 13.3% (-26.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 14 resolved cases.
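For reference, the four per-statute deltas shown above are internally consistent with a simple subtraction from an estimated Tech Center average of 40.0% for each statute. A minimal sketch of that arithmetic (the function name and the 40.0 figure are illustrative reconstructions, not sourced from USPTO data):

```python
def delta_vs_tc(examiner_rate, tc_avg_rate):
    """Examiner's percentage-point delta versus the Tech Center average."""
    return round(examiner_rate - tc_avg_rate, 1)

# Each displayed delta backs out to a 40.0% estimated TC average,
# e.g. 30.4 - 40.0 = -9.6 for the section 101 figure.
for rate in (30.4, 45.5, 9.3, 13.3):
    print(delta_vs_tc(rate, 40.0))  # -9.6, 5.5, -30.7, -26.7
```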

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 09/22/2025 on pages 7-9 of Remarks regarding the rejection under 35 U.S.C. 101 with respect to claims 1, 2, 8, 9, 15 and 16 have been fully considered, but they are not persuasive. See updated rejection below.

On pages 7-9 of Remarks, Applicant asserts that under 101 Step 2A Prong One the claims are not directed to an abstract idea because claim 1 reflects an improvement in finding the best data points to train a neural network-based surrogate model, reducing the number of data points needed for training. However, Examiner respectfully disagrees. MPEP 2106.04(a)(2)(III)(C) addresses mental processes performed on a generic computer; see also MPEP 2106.05(f). These sections set forth that a claim may recite a mental process even with the use of a generic computer. The steps taken to generate predictions for a problem and to filter input data points based on uncertainty measurements are steps that can be performed mentally.

On page 8 of Remarks, Applicant asserts that under 101 Step 2A Prong Two the claims integrate the judicial exception into a practical application; particularly, Applicant points out that a neural network-based surrogate model may be trained to solve complex design problems. However, Examiner respectfully disagrees. See MPEP 2106.05(f), which sets forth that a claim may recite a mental process even with the use of a generic computer. The steps taken to generate predictions for a problem and to filter input data points based on uncertainty measurements are steps that can be performed mentally.
Claim 1 amounts to mere instructions to apply the exception using a neural network-based surrogate model and a partial differential equation solver (e.g., by using these elements as tools). See updated rejection below.

Applicant's arguments on pages 12-15 regarding the rejection under 35 U.S.C. 103 with respect to claims 1, 8, 9 and 15 have been fully considered but are moot. New reference Ghosh has been incorporated below to teach the newly presented limitations.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 8, 9 and 15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1: Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 1 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:

• "generating a first iteration of a training set for a design problem, wherein the first iteration of the training set includes an initial set of input data points labeled with their corresponding solutions to the design problem, and wherein each input data point in the initial set of input data points is a unit cell having a fixed set of geometric parameters;"
• "for each iteration of a number of iterations:"
• "selecting a new set of input data points, wherein each new input data point in the new set of input data points is a unit cell having the fixed set of geometric parameters, and wherein a number of new data points in the set of new data points is defined by M x K new input data points;"
• "generating, based on applying the new set of input data points to the trained neural network-based surrogate model, a set of predictions to the design problem and a set of uncertainty measurements associated with the set of predictions to the design problem;"
• "filtering the new set of input data points based on the set of uncertainty measurements associated with the set of predictions to the design problem to generate a subset of the new set of input data points, wherein the subset of the new set of input data points includes K new input data points selected from the new set of input data points that resulted in predictions to the design problem having the highest uncertainty measurements;"
• "generating, [using a partial differential equation solver,] solutions to the design problem for the subset of the new set of input data points;"
• "generating a next iteration of the training set, wherein the next iteration of the training set includes the initial set of input data points labeled with their corresponding solutions to the design problem, and the subset of the new set of input data points labeled with their corresponding solutions to the design problem; and"

as drafted, under their broadest reasonable interpretation, cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., generating, selecting, filtering). The above limitations in the context of this claim encompass, inter alia, generating a training set, selecting a new set of input data points, generating a set of predictions, filtering data points, generating a set of solutions, and generating a next iteration of the training set (corresponding to mental processes which can be done mentally or with pen and paper).

Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application. The limitations:

• "[generating,] using a partial differential equation solver, [solutions to the design problem for the subset of the new set of input data points;]"
• "automatically retraining the neural network-based surrogate model with the next iteration of the training set."

as drafted, are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using a partial differential equation solver (e.g., by using these elements as tools).

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The same limitations identified above, as drafted, amount to no more than mere instructions to apply the exception using a partial differential equation solver as a tool. See MPEP 2106.05(f). The claim is not patent eligible.
Claim 8 recites a computer readable storage medium for performing steps similar to those of claim 1 and is rejected with the same rationale, mutatis mutandis. The following additional elements, considered individually and as an ordered combination with the additional elements identified above, fail to integrate the abstract idea into a practical application or to amount to significantly more than the abstract idea: "one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising:" This is a recitation of generic computer components to be used in performing the abstract idea, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).

Regarding Claim 9: Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1 Analysis: Claim 9 is directed to a method, i.e., a process, one of the statutory categories.

Step 2A Prong One Analysis: The limitation "wherein each data point in the first set of data points includes a set of parameters associated with the problem," as drafted, under its broadest reasonable interpretation, covers concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., generating). In the context of this claim, the limitation encompasses, inter alia, generating predictions based on first data points (corresponding to mental processes which can be done mentally or with pen and paper).

Step 2A Prong Two Analysis: See the corresponding analysis of Claim 8.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Claim 15 recites a computer system for performing steps similar to those of claim 1 and is rejected with the same rationale, mutatis mutandis. The following additional elements, considered individually and as an ordered combination with the additional elements identified above, fail to integrate the abstract idea into a practical application or to amount to significantly more than the abstract idea: "one or more computer processors; one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising:" This is a recitation of generic computer components to be used in performing the abstract idea, which does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea. See MPEP 2106.05(f).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Cozad et al. ("Learning Surrogate Models for Simulation-Based Optimization"), hereinafter Cozad, in view of Geneva et al. ("Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks"), hereinafter Geneva, and in further view of Ghosh et al. ("Parametric Shape Optimization of Pin-Fin Arrays Using a Surrogate Model-Based Bayesian Method"), hereinafter Ghosh.

Claim 1 is rejected over Cozad, Geneva and Ghosh.

Regarding claim 1, Cozad teaches:

generating, based on applying the new set of input data points to the trained [neural network-based] surrogate model, a set of predictions to the design problem and a set of uncertainty measurements associated with the set of predictions to the design problem ("Exploitation-based techniques sample in difficult-to-model areas such as points of high nonlinearity or discontinuity (problem). If the modeling method provides error estimates (such as kriging) (associated with first set of predictions) or there is another available error metric, these estimates can be used to locate areas of high uncertainty."; Adaptive sampling, page 2216, column 2);

filtering the new set of input data points based on the set of uncertainty measurements associated with the set of predictions to the design problem to generate a subset of the new set of input data points, wherein the subset of the new set of input data points includes K new input data points selected from the new set of input data points that resulted in predictions to the design problem having the highest uncertainty measurements ("The current surrogate model is tested subsequently against the simulation using an adaptive sampling technique that we call error maximization sampling (EMS). If the sampling technique discovers model inconsistency larger than a specified tolerance (highest uncertainty measurements), the newly sampled data points (generated subset of first set of input data based on the filtering) are added to the training set."; page 2212; Note: the model inconsistencies correspond to the uncertainty);

generating, [using a partial differential equation solver,] solutions to the design problem for the subset of the new set of input data points ("As the derivative-free solver progresses, the error (solution) can be calculated at newly sampled candidate points (high uncertainty measurements). If areas of sufficient model mismatch are located, the new points are added to the training set and the model is rebuilt. At the end of this step, the true model error can be estimated by what is, effectively, holdout cross-validation using the newly sampled points."; page 2217);

generating a next iteration of the training set, wherein the next iteration of the training set includes the initial set of input data points labeled with their corresponding solutions to the design problem, and the subset of the new set of input data points labeled with their corresponding solutions to the design problem ("As the derivative-free solver progresses, the error (solution) can be calculated at newly sampled candidate points (high uncertainty measurements). If areas of sufficient model mismatch are located, the new points (subset of the first set of input data) are added to the training set and the model is rebuilt. At the end of this step, the true model error can be estimated by what is, effectively, holdout cross-validation using the newly sampled points."; page 2217; Note: see Figure 1 of Cozad on page 2213, showing that the surrogate model is retrained with the updated training set); and

automatically retraining the neural network-based surrogate model with the next iteration of the training set
(see Figure 1 of Cozad on page 2213, showing that the surrogate model is retrained with the updated training set).

Cozad does not teach: generating a first iteration of a training set for a design problem, wherein the first iteration of the training set includes an initial set of input data points labeled with their corresponding solutions to the design problem, and wherein each input data point in the initial set of input data points is a unit cell having a fixed set of geometric parameters; training a neural network-based surrogate model with the first iteration of the training set for the design problem; and, for each iteration of a number of iterations, selecting a new set of input data points, wherein each new input data point in the new set of input data points is a unit cell having the fixed set of geometric parameters, and wherein a number of new data points in the set of new data points is defined by M x K new input data points.

However, Ghosh teaches:

generating a first iteration of a training set for a design problem, wherein the first iteration of the training set includes an initial set of input data points labeled with their corresponding solutions to the design problem, and wherein each input data point in the initial set of input data points is a unit cell having a fixed set of geometric parameters ("By varying geometric parameters a, b, and c from 3.2 to 6 mm, a three-dimensional design space has been explored to obtain an optimum pin-fin shape that has the maximum efficiency in the developing region of the flow."; page 246; and "An initial design of experiments (DOE) is selected using Latin hypercube sampling (LHS). A Reynolds-averaged Navier–Stokes (RANS) simulation is carried out for each of those design points. The surrogate model is initially trained with respect to this initial population. New design points are selected according to the acquisition function output after this initial training. Objective function values at these points are evaluated by performing a RANS simulation and a new surrogate model is obtained, which is improved by the Bayesian update."; page 248; Note: see Figure 4 of Ghosh, showing that the input data point selection is done iteratively);

training a neural network-based surrogate model with the first iteration of the training set for the design problem (see Figure 4 of Ghosh, showing that the surrogate model is iteratively trained while there is budget remaining); and

for each iteration of a number of iterations: selecting a new set of input data points, wherein each new input data point in the new set of input data points is a unit cell having the fixed set of geometric parameters ("An initial design of experiments (DOE) is selected using Latin hypercube sampling (LHS). A Reynolds-averaged Navier–Stokes (RANS) simulation is carried out for each of those design points. The surrogate model is initially trained with respect to this initial population. New design points are selected according to the acquisition function output after this initial training. Objective function values at these points are evaluated by performing a RANS simulation and a new surrogate model is obtained, which is improved by the Bayesian update."; page 248; Note: see the Figure 4 boxes "Call acquisition function and search for optimum point in global design space" and "Obtain new design point", corresponding to the M x K selection), and wherein a number of new data points in the set of new data points is defined by M x K new input data points (see the same Figure 4 boxes of Ghosh).

It would have been obvious before the effective filing date to combine the true-error surrogate modeling of Cozad with the iterative surrogate model training over geometric parameters of Ghosh to effectively optimize computation time (Ghosh, page 1). Cozad and Ghosh are analogous art because they both concern training of surrogate models.

Cozad does not teach using a neural network-based surrogate model or using a partial differential equation solver. However, Geneva teaches using a neural network-based surrogate model ("Section 3 discusses the auto-regressive dense encoder-decoder model, its training and use as a surrogate model."; page 2, paragraph 5) and using a partial differential equation solver ("Often surrogate models are used to ease this computational burden by providing a fast approximate model that can imitate a standard numerical solver at a significantly reduced computational cost."; Note: the standard numerical solver for partial differential equations (PDEs) being simulated by surrogate models is the algorithm). It would have been obvious before the effective filing date to combine the true-error surrogate modeling of Cozad with the simulated PDE solvers of Geneva for the optimization and computational efficiency of simulated numerical solvers for solution generation (Geneva, page 1, 1. Introduction). Cozad and Geneva are analogous art because they both concern training surrogate models for simulation-based optimization with uncertainty.

Claim 8 is rejected over Cozad, Geneva and Ghosh.
Regarding claim 8, Cozad teaches a computer program product comprising one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising: ("Automated learning of algebraic models for optimization (ALAMO), the computational implementation of the proposed methodology, along with examples and extensive computational comparisons between ALAMO and a variety of machine learning techniques"; Abstract; Note: these are computer-implemented steps to be performed on computer processors and computer readable storage media). The remainder of claim 8 is claim 1 in the form of a computer program product and is rejected for the same reasons as claim 1 stated above.

Claim 9 is rejected over Cozad, Geneva and Ghosh, incorporating the rejection of claim 8.

Regarding claim 9, Cozad teaches wherein each data point in the first set of data points includes a set of parameters associated with the problem ("The current surrogate model is tested subsequently against the simulation using an adaptive sampling technique that we call error maximization sampling (EMS). If the sampling technique discovers model inconsistency larger than a specified tolerance, the newly sampled data points are added to the training set. The surrogate models are iteratively rebuilt and improved until the adaptive sampling routine fails to find model inconsistencies."; page 2212; Note: see Figure 1 of Cozad, showing that the surrogate model is rebuilt every time the training set is updated; the variables xi in the Figure 1 flowchart also include the data points associated with the problem).

Claim 15 is rejected over Cozad, Geneva and Ghosh.
Regarding claim 15, Cozad teaches a computer system comprising one or more computer processors; one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising: ("Automated learning of algebraic models for optimization (ALAMO), the computational implementation of the proposed methodology, along with examples and extensive computational comparisons between ALAMO and a variety of machine learning techniques"; Abstract; Note: these are computer-implemented steps to be performed on computer processors and computer readable storage media). The remainder of claim 15 is claim 1 in the form of a computer system and is rejected for the same reasons as claim 1 stated above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

• NPL: Chen, Ray-Bing, et al. "Finding optimal points for expensive functions using adaptive RBF-based surrogate model via uncertainty quantification." (9 June 2020).
• NPL: Tripathy, Rohit, et al. "Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification." (27 August 2018).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TRAN, whose telephone number is (703) 756-1525. The examiner can normally be reached M-F, 9:30 am - 5:30 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DAVID H TRAN/
Examiner, Art Unit 2147

/HASSAN MRABI/
Primary Examiner, Art Unit 2147
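The active-learning loop recited in claim 1 and walked through in the rejection (train on labeled points, propose M x K candidates, keep the K with the highest prediction uncertainty, label them with a solver, retrain) can be sketched as follows. This is a minimal, hypothetical illustration with a toy surrogate and a stand-in "solver"; the names `sample_unit_cell`, `ToySurrogate`, and `active_learning_loop` are all invented here and do not come from the application.

```python
import random

def sample_unit_cell():
    # Hypothetical unit cell with a fixed set of geometric parameters
    # (three lengths, echoing Ghosh's a, b, c in the 3.2-6 mm range).
    return tuple(random.uniform(3.2, 6.0) for _ in range(3))

class ToySurrogate:
    """Toy stand-in for the neural network-based surrogate model:
    predicts the mean training label and reports higher uncertainty
    for points farther from the training data."""
    def __init__(self):
        self.points, self.labels = [], []

    def train(self, training_set):
        self.points = [p for p, _ in training_set]
        self.labels = [y for _, y in training_set]

    def predict_with_uncertainty(self, p):
        mean = sum(self.labels) / len(self.labels)
        # Uncertainty proxy: squared distance to the nearest training point.
        unc = min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in self.points)
        return mean, unc

def active_learning_loop(initial_points, solver, model, M, K, iterations):
    # First iteration of the training set: initial points labeled with
    # their solutions from the (stand-in) PDE solver.
    training_set = [(p, solver(p)) for p in initial_points]
    model.train(training_set)
    for _ in range(iterations):
        # Select M x K new candidate input data points.
        candidates = [sample_unit_cell() for _ in range(M * K)]
        scored = [(p, *model.predict_with_uncertainty(p)) for p in candidates]
        # Filter: keep the K candidates with the highest uncertainty.
        scored.sort(key=lambda t: t[2], reverse=True)
        subset = [p for p, _, _ in scored[:K]]
        # Label the subset, grow the training set, and automatically
        # retrain the surrogate on the next iteration of the training set.
        training_set += [(p, solver(p)) for p in subset]
        model.train(training_set)
    return model, training_set

solver = lambda cell: sum(cell)  # toy stand-in for a PDE solver
model, ts = active_learning_loop(
    [sample_unit_cell() for _ in range(4)], solver, ToySurrogate(),
    M=5, K=2, iterations=3)
print(len(ts))  # 4 initial + 2 per iteration x 3 iterations = 10
```

The point of the sketch is the §101 dispute in miniature: the selection and filtering steps are simple comparisons, while the solver and retraining are the "additional elements" the rejection treats as mere tools.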

Prosecution Timeline

Aug 18, 2021
Application Filed
Aug 23, 2024
Non-Final Rejection — §101, §103
Nov 29, 2024
Response Filed
Feb 26, 2025
Final Rejection — §101, §103
Apr 29, 2025
Applicant Interview (Telephonic)
Apr 29, 2025
Examiner Interview Summary
May 04, 2025
Response after Non-Final Action
Jun 03, 2025
Request for Continued Examination
Jun 04, 2025
Response after Non-Final Action
Jun 11, 2025
Non-Final Rejection — §101, §103
Sep 22, 2025
Response Filed
Jan 07, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579404
PROCESSOR FOR NEURAL NETWORK, PROCESSING METHOD FOR NEURAL NETWORK, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 14%
With Interview: 38% (+23.2%)
Median Time to Grant: 4y 2m
PTA Risk: High

Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
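The "With Interview" projection appears consistent with a simple additive model: the career allow rate (2 granted / 14 resolved) plus the interview lift in percentage points. A hedged reconstruction; the additive assumption is mine, not documented by the tool, and its exact base rate and rounding may differ:

```python
granted, resolved = 2, 14                 # career figures shown above
career_allow = 100 * granted / resolved   # about 14.3%
interview_lift = 23.2                     # percentage points, from the lift stat
with_interview = career_allow + interview_lift
# Lands near the 38% shown above.
print(round(career_allow, 1), round(with_interview, 1))  # 14.3 37.5
```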
