Prosecution Insights
Last updated: April 19, 2026
Application No. 18/038,041

COMPUTER SURROGATE MODEL TO PREDICT THE SINGLE-PHASE MIXING QUALITY IN STEADY STATE MIXING TANKS

Non-Final OA — §103
Filed: May 22, 2023
Examiner: RUTTEN, JAMES D
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Amgen, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 63% (Moderate)
OA Rounds: 1-2
To Grant: 4y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 63% — grants 63% of resolved cases (365 granted / 580 resolved; +7.9% vs TC avg)
Interview Lift: +38.4% — strong lift among resolved cases with interview
Typical Timeline: 4y 1m average prosecution; 23 applications currently pending
Career History: 603 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 580 resolved cases
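The per-statute deltas above are each consistent with a single Tech Center baseline of about 40.0%. A minimal sketch reproduces the displayed figures; note the 40.0% baseline is inferred from the deltas shown here, not published on this page:

```python
# Reproduce the statute-specific performance lines shown above.
# Assumption: each "vs TC avg" delta is the examiner's allow rate minus
# a ~40.0% Tech Center baseline, inferred from the displayed numbers.
examiner_rates = {"101": 10.0, "103": 50.6, "102": 11.2, "112": 16.7}
tc_average = 40.0  # inferred baseline, percent

for statute, rate in examiner_rates.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```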

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 have been examined.

Claim Objections

Claim 1 is objected to because of the following informalities: the phrase “based on” at the end of line 13 is repeated. Appropriate correction is required. Claims 7 and 15 are objected to for similar reasons noted above with respect to claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7, 9, 11-12 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 20080228680 by Chen et al. (“Chen”) in view of “Data-driven surrogate modeling and benchmarking for process equipment” by Gonçalves et al. (“Goncalves”).

In regard to claim 1, Chen discloses:

1. A method, comprising:

See at least Fig. 4, broadly depicting a method.

generating, by one or more processors, a plurality of training computational fluid dynamic (CFD) models for a plurality of training … configurations …, wherein each training CFD model is generated based on a plurality of … factors associated with each training … configuration;

See Chen ¶ 0026, “Usage of high-fidelity simulation tools such as Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD), for example, has become standard practice in engineering today.” … ¶ 0053, “The FEA model takes the four parameters in the design space as input variables, and performs a simulation to measure the resulting plastic strain and tensile load at the given expansion rate.” Also ¶ 0056, “Process 412 begins with the engineer obtaining a sparse data set from the high-fidelity tool model.”

Chen does not expressly disclose: steady state mixing configurations in which inlet streams are mixed in tanks, … training CFD model is generated based on steady state mixing factors associated with each training steady state mixing configuration;

This is taught by Goncalves. Goncalves, p. 7, last paragraph of section 2.3, “All simulations utilized second-order discretization schemes for the spatial terms, and steady state was assumed.” Also p. 10, section 3.3, “Case 3: flow in an inline mixer (2D)”: For this system, the power number (per nondimensional length Le/d) Np,2D is chosen as the main variable of interest. It is given by:

Np,2D = 2πT2D / (N²d⁴)    (19)

where T2D is the torque applied per fluid mass and N is the rotation speed. Through dimensional analysis, it can be shown that this value is a function of the Reynolds number Re (= ρ(2πN)^(2−n)d²/k), the nondimensional gap between rotor and stator α (= (D − d)/D), the flow index n, and the number of blades Nb.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Goncalves’s mixing configuration/factors in Chen’s CFD in order to reproduce key predictions of a complete CFD simulation at a fraction of the computational cost for commonly used industrial equipment as suggested by Goncalves (see p. 3, section 2 and p. 7, section 3).

calculating, by the one or more processors, a mixing quality for each training steady state mixing configuration using each respective training CFD model;

Chen ¶ 0053, “measure the resulting plastic strain and tensile load at the given expansion rate.” Also ¶ 0060, “the computer obtains a pool of unique neural networks that each perform adequately over their respective training sets.” Also see Goncalves, p. 8, top paragraph, “To determine the mixing performance, cv at the outlet is calculated, which corresponds to the variance of the tracer concentration at the outlet … where the average concentration, c, is computed over the outlet area.”

generating, by the one or more processors, a training dataset that includes the steady state mixing factors associated with each training steady state mixing configuration, and the calculated mixing quality for each training steady state mixing configuration;

Chen, ¶ 0061, “the computer formulates a diverse set of evolutionary selection parameters to form a pool of candidate ensembles.” Also ¶ 0067, “To this point (block 424 of FIG. 4), the neural network training and ensemble selection have been performed using the primary data set. In block 426, the secondary data set is used to select local ensembles from the pool of neural network ensembles developed in block 424.”

training, by the one or more processors, a machine learning model, using the training dataset, to predict mixing qualities for steady state mixing configurations based on based on steady state mixing factors associated with the steady state mixing configurations;

Chen ¶ 0060, “Returning again to FIG. 4, the computer trains a set of neural networks in block 420, varying the training parameters for each network.”

recommending, by the one or more processors, … for a given product based on the trained machine learning model.

Chen, Fig. 4, elements 406, 408, 430 and 432, depicting implementation using a recommended design based upon the trained ensemble. Also ¶ 0054, “The computer displays the optimal solution to the engineer in block 410 for use in implementing the tool.”

Chen does not expressly disclose: one or more of a working volume or an impeller speed.

This is taught by Goncalves, p. 10, section 3.3, “The impeller rotates in the counterclockwise direction around the z-axis … where T₂D is the torque applied per fluid mass and N is the rotation speed.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Goncalves’s impeller speed with Chen’s recommended design in order to provide design optimization as suggested by Goncalves (see p. 1, section 1).

In regard to claim 2, Chen does not expressly disclose:

2. The method of claim 1, further comprising: applying, by the one or more processors, the trained machine learning model to new steady state mixing factors associated with a new steady state mixing configuration; and predicting, by the one or more processors, based on applying the trained machine learning model to the steady state mixing factors associated with the new steady state mixing configuration, a mixing quality for the new steady state mixing configuration.

This is taught by Goncalves, section 3, discussing multiple mixing arrangements. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Goncalves’s multiple/new mixing arrangements with Chen’s simulations and predictions in order to test and benchmark new algorithms for active learning for a class of problems as suggested by Goncalves (see p. 1, Impact Statement).

In regard to claim 3, Chen and Goncalves also teach:

3. The method of claim 1, wherein the steady state mixing factors include one or more of: tank geometry, stirrer geometry, working volume, inlet configuration, outlet configuration, inlet flow rates for each inlet, outlet flow rates for each outlet, agitation speed, impeller speed, fluid Reynolds number for each substance, and other chemical and pharmaceutical properties for each substance.

Goncalves, p. 10, section 3.3, “The impeller rotates in the counterclockwise direction around the z-axis … where T₂D is the torque applied per fluid mass and N is the rotation speed.”

In regard to claim 4, Chen and Goncalves also teach:

4. The method of claim 1, wherein the mixing quality is a measure of standard deviation of trace concentration in the tank.

See Goncalves, p. 8, section 3.1, “To determine the mixing performance, cv at the outlet is calculated, which corresponds to the variance of the tracer concentration at the outlet.” Also see p. 20, “The variational method, based on estimates of standard deviation provided by the GP, presented the best performance on Cases 1 and 3 …”

In regard to claim 7, Chen discloses:

7. A computer system, comprising: one or more processors; and a non-transitory program memory communicatively coupled to the one or more processors and storing executable instructions that, when executed by the one or more processors, cause the processors to:

See Chen, at least Fig. 1, along with ¶ 0047, “The engineer's tools include a computer 104 and software (represented by removable storage media 106), which they control via one or more input devices 108 and output devices 110. The software is stored in the computer's internal memory for execution by one or more processors. The software configures the processor to accept commands and data from the engineer, to process the data in accordance with one or more of the methods disclosed below, and to responsively provide predictions for the performance of the tool being developed or improved.”

All further limitations of claim 7 have been addressed in the above rejection of claim 1.

In regard to claims 9 and 11-12, parent claim 7 is addressed above. All further limitations of claims 9 and 11-12 have been addressed in the above rejections of claims 2-4, respectively.

In regard to claim 15, Chen discloses:

15. A non-transitory computer readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to:

See Chen, at least Fig. 1, along with ¶ 0047, “The engineer's tools include a computer 104 and software (represented by removable storage media 106), which they control via one or more input devices 108 and output devices 110. The software is stored in the computer's internal memory for execution by one or more processors. The software configures the processor to accept commands and data from the engineer, to process the data in accordance with one or more of the methods disclosed below, and to responsively provide predictions for the performance of the tool being developed or improved.”

All further limitations of claim 15 have been addressed in the above rejection of claim 1.

In regard to claims 16-18, parent claim 15 is addressed above. All further limitations of claims 16-18 have been addressed in the above rejections of claims 2-4, respectively.

Claims 5, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Goncalves as addressed above, and further in view of U.S. Patent Application Publication 20190005187 by Costello et al. (“Costello”).

In regard to claim 5, Chen does not expressly disclose:

5. The method of claim 1, further comprising: generating, by the one or more processors, a testing computational fluid dynamic (CFD) model for a testing steady state mixing configuration in which inlet streams are mixed in tanks, wherein the testing CFD model is generated based on a plurality of steady state mixing factors associated with the testing steady state mixing configuration; calculating, by the one or more processors, a mixing quality for the testing steady state mixing configuration using the testing CFD model; applying, by the one or more processors, the trained machine learning model to the steady state mixing factors associated with the testing steady state mixing configuration; predicting, by the one or more processors, based on applying the trained machine learning model to the steady state mixing factors associated with the testing steady state mixing configuration, a quality of mixing for the testing steady state mixing configuration; and evaluating, by the one or more processors, the trained machine learning model by comparing the mixing quality calculated for the testing steady state mixing configuration using the testing CFD model and the mixing quality predicted for the testing steady state mixing configuration using the trained machine learning model.

These steps correspond to a test run that essentially mirrors the prior training, which is addressed above in the rejection of parent claim 1. Chen does not expressly disclose testing. This is taught by Costello ¶ 0015, “Cross validation techniques trained multiple models with a subset of the training data, and then test these model on data not used for training.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Costello’s cross validation with the training of Chen and Goncalves in order to test model performance as suggested by Costello.

In regard to claim 13, parent claim 7 is addressed above. All further limitations of claim 13 have been addressed in the above rejection of claim 5. In regard to claim 19, parent claim 15 is addressed above. All further limitations of claim 19 have been addressed in the above rejection of claim 5.

Claims 6, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Goncalves as addressed above, and further in view of U.S. Patent Application Publication 20200082041 by Albert et al. (“Albert”).

In regard to claim 6, Chen does not expressly disclose:

6. The method of claim 1, wherein the machine learning model is a deep learning model.

This is taught by Albert ¶ 0055, “deep learning.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Albert’s deep learning with Chen’s learning models in order to utilize modular, ultra-fast, scalable models as suggested by Albert.

In regard to claim 14, parent claim 7 is addressed above. All further limitations of claim 14 have been addressed in the above rejection of claim 6. In regard to claim 20, parent claim 15 is addressed above. All further limitations of claim 20 have been addressed in the above rejection of claim 6.

Claims 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Goncalves as addressed above, and further in view of U.S. Patent 10600005 to Gunes et al. (“Gunes”).

In regard to claim 8, Chen does not expressly disclose:

8. The computer system of claim 7, wherein a first set of one or more processors, of the one or more processors, generate the plurality of training computational fluid dynamic (CFD) models, and wherein a second set of one or more processors, of the one or more processors, train the machine learning model.

This is taught by Gunes, col. 8, lines 9-12, “Model training device 100 may coordinate access to training dataset 124 and validation dataset 126 that are distributed across distributed computing system 128 that may include one or more computing devices.” Also col. 22, lines 55-59, “Although some of the operational flows are presented in sequence, the various operations may be performed in various repetitions, concurrently (in parallel, for example, using threads and/or a distributed computing system), and/or in other orders than those that are illustrated.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Gunes’ distributed computing in order to execute operations in parallel (which saves execution time) as suggested by Gunes.

In regard to claim 10, Chen does not expressly disclose:

10. The computer system of claim 9, wherein a third set of one or more processors, of the one or more processors, apply the trained machine learning model to the new steady state mixing factors associated with the new steady state mixing configuration and predict the mixing quality for the new steady state mixing configuration.

This is taught by Gunes, col. 8, lines 9-12, “Model training device 100 may coordinate access to training dataset 124 and validation dataset 126 that are distributed across distributed computing system 128 that may include one or more computing devices.” Also col. 22, lines 55-59, “Although some of the operational flows are presented in sequence, the various operations may be performed in various repetitions, concurrently (in parallel, for example, using threads and/or a distributed computing system), and/or in other orders than those that are illustrated.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use Gunes’ distributed computing in order to execute operations in parallel (which saves execution time) as suggested by Gunes.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James D Rutten whose telephone number is (571)272-3703. The examiner can normally be reached M-F 9:00-5:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen, can be reached at (571)272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/James D. Rutten/
Primary Examiner, Art Unit 2121

Prosecution Timeline

May 22, 2023: Application Filed
Jan 24, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579423
SYSTEMS AND METHODS FOR PREDICTING BIOLOGICAL RESPONSES
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12555004
PATH-SUFFICIENT EXPLANATIONS FOR MODEL UNDERSTANDING
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12541707
METHOD AND SYSTEM FOR DEVELOPING A MACHINE LEARNING MODEL
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12510888
Model Reduction and Training Efficiency in Computer-Based Reasoning and Artificial Intelligence Systems
Granted Dec 30, 2025 (2y 5m to grant)
Patent 12511577
DETERMINING AVAILABILITY OF NETWORK SERVICE
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 63%
With Interview (+38.4%): 99%
Median Time to Grant: 4y 1m
PTA Risk: Low
Based on 580 resolved cases by this examiner. Grant probability derived from career allow rate.
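The headline projections can be reproduced from the career counts shown on this page. A minimal sketch, assuming the interview lift is applied as additive percentage points with a 99% cap (the tool's actual formula is not documented here):

```python
# Reproduce the projection figures from the examiner's career counts.
# Assumptions: grant probability is the raw career allow rate, and the
# interview-adjusted figure adds +38.4 points, capped at 99% -- this
# page does not document the tool's actual method.
granted, resolved = 365, 580
interview_lift = 38.4  # percentage points

allow_rate = 100 * granted / resolved            # ~62.9, displayed as 63%
with_interview = min(allow_rate + interview_lift, 99.0)

print(f"Grant probability: {allow_rate:.0f}%")    # 63%
print(f"With interview: {with_interview:.0f}%")   # 99%
```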
