Prosecution Insights
Last updated: April 19, 2026
Application No. 18/489,528

PREDICTIVE MODELING OF A MANUFACTURING PROCESS USING A SET OF TRAINED INVERTED MODELS

Non-Final OA (§103, §112)
Filed: Oct 18, 2023
Examiner: SKRZYCKI, JONATHAN MICHAEL
Art Unit: 2116
Tech Center: 2100 — Computer Architecture & Software
Assignee: Applied Materials, Inc.
OA Round: 1 (Non-Final)

Grant Probability: 66% (Favorable)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 66% (146 granted / 221 resolved), above average (+11.1% vs TC avg)
Interview Lift: strong, +33.1% (allowance rate of resolved cases with an interview vs. without)
Typical Timeline: 3y 0m avg prosecution; 18 applications currently pending
Career History: 239 total applications across all art units

Statute-Specific Performance

§101: 11.4% (-28.6% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 27.3% (-12.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 221 resolved cases

Office Action

§103 §112
DETAILED ACTION

Claims 1-20 (filed 10/18/2023) have been considered in this action. Claims 1-20 are newly filed.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 7 and 17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 7 and 17 each recite that the sets of input data comprise at least one of different hyperparameters, different initialization values, or different training data.
However, claims 1 and 11 establish the “sets of input data” as “for configuring the semiconductor device manufacturing process”, which PHOSITA would recognize as different from, and not corresponding to, the “hyperparameters, initialization values, or training data” of claims 7 and 17. Hyperparameters, initialization values and training data would be well-understood by PHOSITA to correspond with the different parameters utilized in the training of machine learning models as described in the instant specification ([0034], [0042], [0071]). Accordingly, the parameters utilized by the machine learning model in the course of its training are not the same as what is output by the machine learning model (respective sets of input data for configuring the semiconductor device manufacturing process). Because it is not clearly taught by the provided specification that the internal parameters, including the hyperparameters, initialization values and training data, are encompassed by the sets of input data, it is considered new matter, and claims 7 and 17 are rejected under 35 U.S.C. 112(a).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999). The terms “different hyperparameter, different initialization value or different training data” in claims 7 and 17 are used by the claims to mean “set of input data,” while the accepted meaning is “data utilized in the formulation of a machine learning model.” The terms are indefinite because the specification does not clearly redefine them.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 5-9, 11 and 15-19 are rejected under 35 U.S.C.
103 as being unpatentable over David (US 20170109646, hereinafter David) in view of Liano et al. (US 20110320386, hereinafter Liano). In regards to Claim 1, David teaches “A method comprising: receiving, by a processing device, expected output data defining an attribute of a semiconductor device manufactured by at least one semiconductor device manufacturing process performed within at least one processing chamber” ([0029] This disclosure describes new techniques for measuring and/or compensating for process variations in production runs of a semiconductor manufacturing processes, for using these techniques to predict yield at any step of the process, and for optimizing testing and burn-in procedures. For example, machine learning algorithms can be used to create new approaches to data analysis by incorporating new types of input data, and the data can be more effectively correlated, organized and pre-processed, then used to make process adjustments. Data from prior production runs can be used to create a model for a target parameter, and data from a current production run can be input to the model to generate a prediction for the target parameter, and to correlate the prediction with the actual data[0068] In step 602, a target is selected. In one embodiment, the target is an overlay measurement (e.g., IBO measurement, DBO measurement, CD-SEM, TEM, etc.) and could be a linear overlay offset in the x and y direction. The target could also be other lithography apparatus parameters that need to be controlled to minimize overlay error, such as reticle position, reticle rotation, or reticle magnification. The target could be parametric data such as on/off current of the transistor, transistor thresholds, or some other parameter that quantifies the health of the transistor. The target could also be yield information, such as the functionality of a given die or area on the wafer (sometimes measured as either pass or fail). 
The target could also be semiconductor device performance data; wherein the target data is expected output) “wherein the expected output data corresponds to an unexplored portion of a process space associated with the at least one semiconductor device manufacturing process” ([0088] FIG. 7 illustrates one example of collecting input data for an input feature set 710, which is a matrix 712 having a number of input parameters 712a, 712b . . . 712x, which are relevant to a specified target, which may be a measurement, a calculated parameter, or a modeled parameter. The input data may be collected during wafer fabrication, at or before wafer test and sort and/or wafer probe testing. For example, input data can be collected from the process equipment 720 during steps for etch, CMP, gap fill, blanket, RTP, etc., and may include process variables such as process duration, temperature, pressure, RF frequency, etc.... In step 802, specified input data is collected, e.g., as an input vector, then fed into the model in step 804. If some of the specified data is not present in the 1×n vector, there are a number of techniques that can replace or estimate the missing data in the input vector; wherein missing data is an unexplored portion of a process space; [0104] In an embodiment, as new input data and corresponding target data is generated, the algorithm can be retrained so as to produce a better model that will give better scores; wherein the new target data is inherently from an unexplored space, because it is new and thus has not been used for training [0105] In some embodiments, a set of algorithms can be trained simultaneously with the same input and target dataset. The algorithm that gives the best output can be selected for deployment; wherein new target data can be considered unexplored process space) “and identifying, by the processing device, expected input data by using the expected output data as input to a plurality of homogeneous inverted machine learning models” ([0068] In step 602, a target is selected. In one embodiment, the target is an overlay measurement (e.g., IBO measurement, DBO measurement, CD-SEM, TEM, etc.) and could be a linear overlay offset in the x and y direction.
The target could also be other lithography apparatus parameters that need to be controlled to minimize overlay error, such as reticle position, reticle rotation, or reticle magnification. The target could be parametric data such as on/off current of the transistor, transistor thresholds, or some other parameter that quantifies the health of the transistor. [0069] In step 604, the parameters that are useful in evaluating the target are identified, and in step 606, input data relevant to the parameters is collected. Every set of input data is associated with a specific output or target. For example, a set of measured and observed values can be associated with an overlay offset. Those values would be an input vector to the model, and would be associated with the target, e.g., the measured offset. [0098] The algorithm can be a classification or regression algorithm, which are types of machine learning algorithms, but could be one of many different types of algorithms. Examples of some of these algorithms that can be used include: Decision Trees, CART (Classification and Regression Trees), C5.0, C4.5, CHAID, Support Vector Regression, Artificial Neural Networks, Perceptron, Back Propagation, Deep Learning, Ensemble, Boosting/Bagging, Random Forests, GBM (Gradient Boosting Machine), AdaBoost; wherein the models of David are inverted machine learning models, as they determine an expected configuration parameter from historical training data of machine learning models trained with previous manufacturing runs and the ensemble with bagging techniques implies a plurality of homogeneous models that are trained with different training data) “wherein each inverted machine learning model of the plurality of homogeneous inverted machine learning models is trained to determine, by performing linear extrapolation based on the expected output data, a respective set of input data of a plurality of sets of input data for configuring the semiconductor device manufacturing process to
manufacture the semiconductor device” ([0090] In a typical situation, the score can be the overlay offset prediction, for example, an offset in the x direction or the y direction. In step 808, the score is used to determine an adjustment to be made to one or more components of the lithographic apparatus. For example, the offset data could be applied to a control system to make an adjustment to the lithography apparatus parameters or “control knobs” to adjust for the overlay error. [0092] In one embodiment, machine learning algorithms could be used with all or some of the above mentioned input data, along with CD error measurement and overlay error measurement to create a model whose target is a lithography apparatus control parameter, such as focus, power, or x-y direction control. The goal is to optimize the lithography apparatus control parameter (given a measured CD) such that the lithography apparatus output results in the best semiconductor device performance or yield. [0120] There are also a number of approaches to feature selection. One approach is implementing random forests which identify which input features are most relevant to predicting overlay error. Another technique is the CHAID decision tree, which will also identify features that are important. Linear regression is another technique. ANOVA is another technique). David fails to explicitly teach “…machine learning models is trained to determine, by performing linear extrapolation based on the expected output data, a respective set of input data…”. That is, while David suggests linear regression type mathematical feature identification for identifying parameters for configuring the semiconductor manufacturing device, it is deficient in teaching the performance of linear extrapolation for determining such parameters. 
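The distinction the examiner draws here, regression-style feature identification versus linear extrapolation into unexplored process space, can be illustrated with a minimal sketch. This is illustrative only, not code from David, Liano, or the application, and all training pairs and the queried output value are hypothetical; the point is the "inverted" direction (observed output in, process setting out) queried outside the training range.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical training pairs: observed outputs -> the settings that
# produced them (the inverted direction: output in, process input out).
observed_outputs = [1.0, 2.0, 3.0, 4.0]     # explored portion of process space
process_settings = [10.0, 20.0, 30.0, 40.0]
a, b = fit_line(observed_outputs, process_settings)

# Query at an expected output outside the training range, i.e. linear
# extrapolation into the unexplored portion of the process space.
expected_output = 6.0
print(a * expected_output + b)  # -> 60.0
```

Linear regression alone (as in David's feature selection) only fits the line; the extrapolation step is the final evaluation at a point outside the fitted range, which is the capability the rejection attributes to Liano.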
Liano teaches “…machine learning models is trained to determine, by performing linear extrapolation based on the expected output data, a respective set of input data…” ([0022] For linear asymptotic behavior where φ_b(x) ~ x: φ_b(x) = log(1 + exp(x)) (Eq. 10). [0032] The model 14 may include neural network and/or support vector machine embodiments capable of employing the techniques described herein, including asymptotic analysis techniques, capable of superior extrapolation properties that may be especially useful in control, prediction, and optimization applications. [0030] The pre-processed data 48 may then be utilized as part of an analysis of the extrapolation behavior of the system 10. The extrapolation behavior may use knowledge available about the system 10 to determine the extrapolation behavior of the system 10. In one example, the pre-processed data 48 may be analyzed and used to determine any asymptotic tendencies of the system 10. For example, techniques such as linear regression (e.g., least squares, adaptive estimation, and ridge regression), the method of dominant balance (MDB), and/or others may be used. Indeed, any number of methods useful in the asymptotic analysis of the system 10 may be employed. In cases where the asymptotic behavior of the system is not known, then a desired asymptotic behavior may be used. For example, a linear behavior may suitably control a heating process, while a constant behavior may suitably control a mixing process. Accordingly, an extrapolation behavior 50 may be found or defined).
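Liano's basis function in [0022], φ_b(x) = log(1 + exp(x)), is the "softplus" form, and its linear asymptote (φ_b(x) ~ x) is what gives a unit built on it linear, rather than saturating, extrapolation behavior. A small numerical check of that asymptotic property (illustrative only; not code from the reference):

```python
import math

def softplus(x: float) -> float:
    """Numerically stable log(1 + exp(x)): the phi_b basis of Liano [0022]."""
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

# Linear asymptote on the right: softplus(x) - x -> 0 as x grows, so the
# unit extrapolates along the line y = x instead of saturating like tanh.
for x in (10.0, 20.0, 40.0):
    assert abs(softplus(x) - x) < 1e-4

# Flat asymptote on the left: softplus(x) -> 0 as x -> -inf.
assert softplus(-40.0) < 1e-10

print(round(softplus(40.0) - 40.0, 12))  # -> 0.0 (vanishing gap to y = x)
```

The contrast with sigmoid or tanh hidden units, which flatten outside the training range, is the extrapolation property the combination rationale relies on.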
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system that utilized multiple machine learning models to determine control parameters for a semiconductor process as taught by David, with the use of machine learning models that extrapolate solutions of unexplored/unknown space as taught by Liano, because it would provide the stated benefits of Liano, namely “[0005] in many instances it is difficult for the model to extrapolate outside of the training data. Accordingly, the extrapolation property of the model may be as important as the model's accuracy over the training dataset. Indeed, even if the resulting model exhibits a good quality of fit (i.e., high fidelity) over the training data set, this model property by itself may not be sufficient to render the model useful.” In other words, an improved model that is able to make determinations outside the training data set would be realized by David when incorporating the features of Liano. By combining these elements, it can be considered taking the known use of a machine learning model that uses linear extrapolation of unknown space for determining control parameters, and incorporating these features into the known machine learning models that determine control parameters for semiconductor manufacturing processes in a known way that achieves predictable results.

In regards to Claim 11, the servers and clients of David ([0165]) teach the recited structures. Claim 11 corresponds with a system that performs the method of claim 1, and thus claim 11 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 1.

In regards to Claim 5, the combination of David and Liano teach the method as incorporated by claim 1 above.
David further teaches “The method of claim 1, wherein each set of input data comprises data related to performing the semiconductor device manufacturing process that is indicative of at least one of: time, energy, temperature, voltage, gas flow rate, wafer spin speed, distance, pressure, a precursor, a reactant, or a dilutant” ([0046] The algorithm can be a supervised learning algorithm, where a model can be trained using a set of input data and measured targets. The targets can be the critical dimensions that are to be controlled. The input data can be upstream metrology measurements, or data from process equipment (such as temperatures and run times)). In regards to Claim 15, the servers and clients of David ([0165]) teach the recited structures. Claim 15 corresponds with a system that performs the method of claim 5, and thus claim 15 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 5. In regards to Claim 6, the combination of David and Liano teach the method as incorporated by claim 1 above. David further teaches “The method of claim 1, wherein the expected output data for the manufacturing process comprises one or more values that indicate a layer thickness, a layer uniformity, or a structural width of a product that will be output by the manufacturing process.” ([0046] In another example, virtual metrology can use machine learning algorithms to predict metrology metrics such as film thickness and critical dimensions (CD) without having to take actual measurements, in real-time. This can have a big impact on throughput and also lessen the need for expensive TEM or SEM x-section measurements; [0050] In yet another example, machine learning algorithms can be used to control a manufacturing process step. As noted above, virtual metrology can be used to predict a critical dimension or film thickness for a manufacturing process step. 
Before or during processing of this manufacturing step, the prediction can then be used to set and/or control any number of processing parameters (e.g. run time) for that processing step. For example, in the case of CMP, if virtual metrology predicts that a dielectric film thickness will be 100 Angstroms thicker than the target thickness if the wafer was to be polished at the nominal polish time, then a calculation can be made to lengthen the polish time so that the final polished thickness can be closer to the target thickness.). In regards to Claim 16, the servers and clients of David ([0165]) teach the recited structures. Claim 16 corresponds with a system that performs the method of claim 6, and thus claim 16 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 6. In regards to Claim 7, the combination of David and Liano teach the method as incorporated by claim 1 above. David further teaches “The method of claim 1, wherein each set of input data of the plurality of sets of input data comprises at least one of: a different hyperparameter, a different initialization value, or different training data” ([0085] In step 612, the data is then fed into the algorithm for training. The algorithm could be one of many different types of algorithms. Examples of machine learning algorithms include... and Ensemble, including Boosting/Bagging, Random Forests, and GBM (Gradient Boosting Machine). The best algorithm may not be a single algorithm, but can be an ensemble of algorithms; wherein bagging is a form of ensemble learning with the same architecture trained with different training data). In regards to Claim 17, the servers and clients of David ([0165]) teach the recited structures. Claim 17 corresponds with a system that performs the method of claim 7, and thus claim 17 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 7. 
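The examiner's bagging observation for claims 7 and 17, an ensemble of same-architecture models that differ only in their bootstrap-resampled training data, can be sketched as follows. The "model" here is deliberately trivial and entirely hypothetical (it just predicts its training mean); only the train-the-same-architecture-on-different-resamples structure matters:

```python
import random
import statistics

def fit_mean(sample):
    """A deliberately trivial stand-in 'model': predicts its training mean."""
    return statistics.fmean(sample)

def bagging_ensemble(data, n_models=5, seed=0):
    """Homogeneous ensemble: the same model architecture trained n times,
    each time on a different bootstrap resample of the training data."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_models):
        resample = [rng.choice(data) for _ in data]  # sample with replacement
        members.append(fit_mean(resample))
    return members

def ensemble_predict(members):
    """Aggregate the homogeneous members' outputs (here, by averaging)."""
    return statistics.fmean(members)

readings = [98.0, 101.0, 99.5, 100.5, 101.5, 99.0]  # hypothetical measurements
members = bagging_ensemble(readings)
print(len(members), round(ensemble_predict(members), 2))
```

Each member is the same architecture; what varies is the training data, which is the sense in which the cited Ensemble/Bagging passage is read as teaching "different training data" per model.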
In regards to Claim 8, the combination of David and Liano teach the method as incorporated by claim 1 above. Liano further teaches “The method of claim 1, wherein the plurality of homogeneous inverted machine learning models comprises a plurality of Feed Forward Neural Networks” ([0033] FIG. 4 depicts an embodiment of a neural network 60 capable of employing the techniques described herein. More specifically, the illustrated neural network may be trained by using the logic 40 as described above with respect to FIG. 3 to incorporate asymptotic analysis. In the illustrated embodiment, the neural network includes a plurality of input nodes (i.e., input layer) 62, a plurality of hidden nodes (i.e., hidden layer) 64, and multiple output nodes (i.e., output layer) 66. Accordingly, the neural network 60 is a multi-layer, feed-forward network with linear outputs having a single hidden layer 64. It is to be understood that while the depicted embodiment illustrates a specific neural network architecture, other architectures having more or less nodes as well as having more than one hidden layer may be used. Indeed, the techniques herein may be incorporated in any number of neural network architectures). In regards to Claim 18, the servers and clients of David ([0165]) teach the recited structures. Claim 18 corresponds with a system that performs the method of claim 8, and thus claim 18 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 8. In regards to Claim 9, the combination of David and Liano teach the method as incorporated by claim 8 above. Liano further teaches “The method of claim 8, wherein each inverted machine learning model of the plurality of homogeneous inverted machine learning models comprises an output layer and a plurality of hidden layers to model the semiconductor device manufacturing process” ([0033] FIG. 4 depicts an embodiment of a neural network 60 capable of employing the techniques described herein. 
More specifically, the illustrated neural network may be trained by using the logic 40 as described above with respect to FIG. 3 to incorporate asymptotic analysis. In the illustrated embodiment, the neural network includes a plurality of input nodes (i.e., input layer) 62, a plurality of hidden nodes (i.e., hidden layer) 64, and multiple output nodes (i.e., output layer) 66. Accordingly, the neural network 60 is a multi-layer, feed-forward network with linear outputs having a single hidden layer 64. It is to be understood that while the depicted embodiment illustrates a specific neural network architecture, other architectures having more or less nodes as well as having more than one hidden layer may be used. Indeed, the techniques herein may be incorporated in any number of neural network architectures) “and wherein the plurality of hidden layers comprises a polynomial function and the output layer comprises a linear activation function” ([0033] FIG. 4 depicts an embodiment of a neural network 60 capable of employing the techniques described herein. More specifically, the illustrated neural network may be trained by using the logic 40 as described above with respect to FIG. 3 to incorporate asymptotic analysis. In the illustrated embodiment, the neural network includes a plurality of input nodes (i.e., input layer) 62, a plurality of hidden nodes (i.e., hidden layer) 64, and multiple output nodes (i.e., output layer) 66. Accordingly, the neural network 60 is a multi-layer, feed-forward network with linear outputs having a single hidden layer 64. It is to be understood that while the depicted embodiment illustrates a specific neural network architecture, other architectures having more or less nodes as well as having more than one hidden layer may be used. 
Indeed, the techniques herein may be incorporated in any number of neural network architectures; wherein a linear function is a form of polynomial function, and all layers are activated with linear functions, including hidden and output).

In regards to Claim 19, the servers and clients of David ([0165]) teach the recited structures. Claim 19 corresponds with a system that performs the method of claim 9, and thus claim 19 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 9.

Claims 2-4, 10, 12-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over David and Liano as applied to claims 1 and 11 above, and further in view of Tristan et al. (US 20190095805, hereinafter Cay).

In regards to Claim 2, the combination of David and Liano teaches the method as incorporated by claim 1 above. Liano teaches “…manufacturing process inputs defining an extrapolated solution corresponding to the unexplored portion of the process space” ([0015] Indeed, the model 14 may be capable of control, prediction, and optimization of the system 12. For example, the model 14 may be capable of process control, quality control, energy use optimization (e.g., electricity use optimization, fuel use optimization), product mix management, financial optimization, and so forth. [0027] By incorporating the asymptotic behavior of the system 10, the resulting modeled system 34 may be capable of a substantially improved extrapolation behavior, including the ability to more closely model the actual system 10. [0032] If the model 14 is deemed not suitable for use, then the logic 40 may loop to block 42 to repeat the model's training process. Indeed, the model 14 may be iteratively trained so as to achieve an accuracy, a high-order behavior, and extrapolation properties suitable for modeling the system 10.
The model 14 may include neural network and/or support vector machine embodiments capable of employing the techniques described herein, including asymptotic analysis techniques, capable of superior extrapolation properties that may be especially useful in control, prediction, and optimization applications. The combination of David and Liano fail to teach “The method of claim 1, further comprising: combining, by the processing device, at least a first set of input data of the plurality of sets of input data with a second set of input data of the plurality of sets of input data to generate a set of semiconductor device manufacturing process inputs …, wherein the set of semiconductor device manufacturing process inputs comprises a plurality of candidate values; and storing, by the processing device, the set of semiconductor device manufacturing process inputs in a storage device”. Cay teaches “The method of claim 1, further comprising: combining, by the processing device, at least a first set of input data of the plurality of sets of input data with a second set of input data of the plurality of sets of input data to generate a set of semiconductor device manufacturing process inputs” ([col 3 line 63] Certain aspects and features of the present disclosure relate to optimizing a manufacturing process for an object (e.g., a physical product) using a combination of an optimization model and one or more machine learning models, such as a neural network...The recommended set of values can be the combination of values for the configurable settings that best meets a user-defined goal (e.g., a particular quality level or price point), as compared to all of the other combinations of values analyzed during the optimization process....More specifically, a computing system can execute an optimization model to identify a recommended set of values for configurable settings of a manufacturing process. 
Executing the optimization model can involve implementing an iterative process for maximizing or minimizing an objective function) “wherein the set of semiconductor device manufacturing process inputs comprises a plurality of candidate values; and storing, by the processing device, the set of semiconductor device manufacturing process inputs in a storage device” ([col 2 line 15] Each iteration of the iterative process can include selecting a current set of candidate values for the configurable settings from within a current region of a search space defined by the optimization model, the current set of candidate values being selected for use in a current iteration of the iterative process; [col 4 line 41] during each iteration of the optimization model, the optimization model can first determine a current set of values for the configurable settings to analyze. In a typical optimization process, the optimization model may next input the current set of values to an objective function that is a predefined linear equation. But in some examples described herein, the optimization model can instead provide the current set of values as input to one or more trained machine learning models that are separate from the optimization model. The optimization model may communicate with the one or more trained machine learning models via an application programming interface (API). The trained machine learning models can receive the current set of values and generate respective output values based on the current set of values; [col 7 line 7] Network-attached data stores 110 can store data to be processed by the computing environment 114 as well as any intermediate or final data generated by the computing system in non-volatile memory.
But in certain examples, the configuration of the computing environment 114 allows its operations to be performed such that intermediate and final data results can be stored solely in volatile memory (e.g., RAM), without a requirement that intermediate or final data results be stored to non-volatile types of memory (e.g., disk). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system that determines semiconductor manufacturing parameters using an ensemble of neural networks as taught by David and Liano with the use of saving sets of candidate values for control parameters determined via machine learning that have been combined until a most optimized solution is found as taught by Cay, because it would gain the benefit of Cay, namely finding a most optimized solution that yields improvements to the manufacturing process or manufactured object ([col 4]). By combining these elements, it can be considered taking the known ability to generate sets of candidate input data for configuring a semiconductor manufacturing process and saving them to a memory, and using these features in the machine learning ensemble of David and Liano in a known way that achieves predictable results. In regards to Claim 12, the servers and clients of David ([0165]) teach the recited structures. Claim 12 corresponds with a system that performs the method of claim 2, and thus claim 12 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 2. In regards to Claim 3, the combination of David, Liano and Cay teach the method as incorporated by claim 2 above.
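As context for the Cay passages quoted above: they describe an iterative optimization loop that scores each candidate set of settings with a trained machine-learning model in place of a predefined linear objective function. A minimal sketch of that pattern follows; the setting names, search-space bounds, and toy surrogate function are hypothetical illustrations, not anything taken from Cay, David, or Liano:

```python
import random

def surrogate_model(settings):
    """Stand-in for Cay's trained ML model: predicts a target characteristic
    from a candidate set of values for the configurable settings. The peak at
    temp=350, pressure=2.0 is a toy choice for illustration only."""
    temp, pressure = settings
    return -((temp - 350.0) ** 2) / 100.0 - 10.0 * (pressure - 2.0) ** 2

def optimize(iterations=200, seed=0):
    """Each iteration selects a current set of candidate values from the
    search space, scores it with the surrogate instead of a linear objective,
    and keeps the best-scoring set found so far as the recommended set."""
    rng = random.Random(seed)
    best_settings, best_score = None, float("-inf")
    for _ in range(iterations):
        # Select a current set of candidate values from the search space.
        candidate = (rng.uniform(300.0, 400.0), rng.uniform(1.0, 3.0))
        # Evaluate it with the trained model rather than a linear equation.
        score = surrogate_model(candidate)
        if score > best_score:
            best_settings, best_score = candidate, score
    # The recommended set would then be stored and/or applied to the process.
    return best_settings, best_score
```

In Cay's framing the surrogate would be one or more trained models reached via an API, and the region of the search space could be refined each iteration; the uniform random sampling here is only the simplest possible stand-in for that selection step.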
Cay further teaches “The method of claim 2, further comprising clustering, by the processing device, the first set of input data and the second set of input data into a plurality of groups, wherein each group of the plurality of groups comprises a respective value for the first set of input data and a respective value for the second set of input data” ([col 2 line 11] The operations can include executing an optimization model to identify a recommended set of values for configurable settings of a manufacturing process associated with an object. The optimization model can be configured to determine the recommended set of values by implementing an iterative process using an objective function. Each iteration of the iterative process can include selecting a current set of candidate values for the configurable settings from within a current region of a search space defined by the optimization model, the current set of candidate values being selected for use in a current iteration of the iterative process; providing the current set of candidate values as input to a trained machine learning model that is separate from the optimization model, the trained machine learning model being configured to predict a value for a target characteristic of the object or the manufacturing process based on the current set of candidate values;[col 26 line 7] FIG. 11 is a flow chart of an example of a process for generating and using a machine learning model according to some aspects. Machine learning is a branch of artificial intelligence that relates to mathematical models that can learn from, categorize, and make predictions about data. Such mathematical models, which can be referred to as machine learning models, can classify input data among two or more classes; cluster input data among two or more groups; [col 27 line 64] In block 1112, the trained machine learning model is used to analyze the new data and provide a result. 
For example, the new data can be provided as input to the trained machine learning model. The trained machine learning model can analyze the new data and provide a result that includes a classification of the new data into a particular class, a clustering of the new data into a particular group, a prediction based on the new data, or any combination of these). In regards to Claim 13, the servers and clients of David ([0165]) teach the recited structures. Claim 13 corresponds with a system that performs the method of claim 3, and thus claim 13 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 3. In regards to Claim 4, the combination of David, Liano and Cay teach the method as incorporated by claim 2 above. David further teaches “The method of claim 2, wherein the plurality of candidate values comprises a range of values for the first set of input data and a range of values for the second set of input data.” ([0073] If the reflectometry data is collected by illuminating the target with unpolarized broadband light and has a detectable wavelength range of 250 nm to 850 nm, then the user could choose to sample that light from 250 nm to 850 nm at 2 nm intervals, to get a total of 301 spectral intensity measurements for that wavelength range. These 301 samples would each be an input to the algorithm. An example of how the input data is associated with a target is shown in Table III. [0090] If the target was a parametric test value, then the score will be a prediction of that parametric test value. In a typical situation, the score can be the overlay offset prediction, for example, an offset in the x direction or the y direction. In step 808, the score is used to determine an adjustment to be made to one or more components of the lithographic apparatus. 
For example, the offset data could be applied to a control system to make an adjustment to the lithography apparatus parameters or “control knobs” to adjust for the overlay error.[0202] In the case where a prediction can be made, that prediction may then be checked to ensure that the prediction is within acceptable bounds set by the user or the system. In the case that the prediction is beyond these bounds, the client will execute its “safe mode” action. In the case that the prediction is within the expected ranges, the prediction is delivered in accordance to the user specified manner. A log of the data, model used, and the prediction may be kept; wherein anytime there are multiple values, they inherently have a range or likewise the range is the boundaries defined by David). In regards to Claim 14, the servers and clients of David ([0165]) teach the recited structures. Claim 14 corresponds with a system that performs the method of claim 4, and thus claim 14 is rejected under 35 U.S.C. 103 using a similar analysis as applied to claim 4. In regards to Claim 10, the combination of David and Liano teach the method as incorporated by claim 1 above. The combination of David and Liano fail to teach “The method of claim 1, further comprising: providing, by the processing device for display, a plurality of candidate input value sets, wherein each candidate input value set of the plurality of candidate input value sets corresponds to the expected output data for the semiconductor device manufacturing process; receiving, by the processing device, a user selection of a candidate input value set of the plurality of candidate input value sets to obtain a selected candidate input value set; and initiating, by the processing device, a run of the semiconductor device manufacturing process using the selected candidate input value set”. 
Cay teaches “The method of claim 1, further comprising: providing, by the processing device for display, a plurality of candidate input value sets, wherein each candidate input value set of the plurality of candidate input value sets corresponds to the expected output data for the semiconductor device manufacturing process” ([col 32 line 28] the processing device can transmit the electronic communication over a network to a remote user device (e.g., a laptop computer, mobile phone, or tablet) associated with an operator of the manufacturing process. The user device can receive the electronic communication and responsively output the recommended set of values on a display device to the operator, who may be located on the manufacturing floor or otherwise close to a control panel associated with the manufacturing process. Based on the output, the operator can adjust the configurable settings to the recommended set of values to improve the manufacturing process. As still another example, the electronic communication can be a display signal for generating a graphical user interface on a display device, such as a touch-screen display or a liquid crystal display. The graphical user interface can include the recommended set of values. An operator of the manufacturing process can view the graphical user interface on the display device and tune the configurable settings to the recommended set of values, to improve the manufacturing process...(151) The iterative process can begin at block 1402, in which a processing device executing the optimization model can select a current set of candidate values for the configurable settings to be used in the current iteration of the iterative process. 
The current set of candidate values can be selected from within a current region of a search space defined by the optimization model; [col 2 line 23] the trained machine learning model being configured to predict a value for a target characteristic of the object or the manufacturing process based on the current set of candidate values) “receiving, by the processing device, a user selection of a candidate input value set of the plurality of candidate input value sets to obtain a selected candidate input value set” ([col 22 line 47] a user may interact with one or more user interface windows presented to the user in a display under control of the ESPE independently or through a browser application in an order selectable by the user. For example, a user may execute an ESP application, which causes presentation of a first user interface window, which may include a plurality of menus and selectors such as drop down menus, buttons, text boxes, hyperlinks, etc. associated with the ESP application as understood by a person of skill in the art) “and initiating, by the processing device, a run of the semiconductor device manufacturing process using the selected candidate input value set” ([col 4 line 5] The optimization model and the machine learning models can cooperate with one another to determine a recommended set of values for configurable settings of the manufacturing process. The recommended set of values can be the combination of values for the configurable settings that best meets a user-defined goal (e.g., a particular quality level or price point), as compared to all of the other combinations of values analyzed during the optimization process. In some examples, the recommended set of values can be the optimal set of values as determined by the optimization process. The recommended set of values can then be applied to the manufacturing process, which can yield significant improvements to the manufacturing process or the manufactured object). 
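The three claim-10 steps mapped in the passage above (providing candidate input value sets for display, receiving a user selection, and initiating a run with the selected set) reduce to a simple control flow. A minimal sketch, in which every function name, field name, and the run-initiation stub are hypothetical illustrations rather than anything from Cay or David:

```python
def display_candidates(candidate_sets):
    """Provide candidate input value sets for display (e.g., rendered in a
    GUI or sent over a network to an operator's device)."""
    return [f"[{i}] {values}" for i, values in enumerate(candidate_sets)]

def select_candidate(candidate_sets, user_index):
    """Receive a user selection and return the selected candidate set."""
    return candidate_sets[user_index]

def initiate_run(selected_values):
    """Stub for initiating a process run with the selected values; a real
    controller would push these settings to the manufacturing equipment."""
    return {"status": "run started", "settings": selected_values}

# Toy candidate sets, each a combination of configurable-setting values.
candidates = [
    {"temp_c": 345.0, "pressure_torr": 2.1},
    {"temp_c": 352.5, "pressure_torr": 1.9},
]
lines = display_candidates(candidates)
chosen = select_candidate(candidates, 1)  # e.g., the operator picks option 1
result = initiate_run(chosen)
```

The point of the sketch is only the ordering the claim recites: display precedes selection, and the selected set (not the whole list) is what parameterizes the run.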
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the system which determines control parameters for a semiconductor manufacturing process using an ensemble of machine learning models as taught by David and Liano, with the use of a user interface that allows user selection of candidate values and which optimizes the candidates until a user-defined goal is achieved as taught by Cay, because it would gain the obvious benefit of allowing a user interface for control and testing capabilities of different candidate parameters, thus improving the user experience. By combining these elements, it can be considered taking the known display methods of Cay, and applying them to David in a known way that achieves predictable results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M SKRZYCKI whose telephone number is (571)272-0933. The examiner can normally be reached M-Th 7:30-3:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ken Lo can be reached at 571-272-9774. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JONATHAN MICHAEL SKRZYCKI/Examiner, Art Unit 2116

Prosecution Timeline

Oct 18, 2023
Application Filed
Jan 27, 2026
Non-Final Rejection — §103, §112
Mar 23, 2026
Applicant Interview (Telephonic)
Mar 23, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595886
CONTROL OF A WATER SUPPLY SYSTEM USING PUMPING STATIONS WITH RESOURCE OPTIMIZED PRESSURE AND FLOW TARGET VALUES
2y 5m to grant Granted Apr 07, 2026
Patent 12570003
SYSTEMS, AND METHODS FOR REAL TIME CALIBRATION OF MULTIPLE RANGE SENSORS ON A ROBOT
2y 5m to grant Granted Mar 10, 2026
Patent 12562352
PREDICTION METHOD AND INFORMATION PROCESSING APPARATUS FOR PREDICTING THE PROCESS RESULT IN A PLASMA ETCHING PROCESS
2y 5m to grant Granted Feb 24, 2026
Patent 12560918
PRODUCTION SEQUENCING OPTIMIZATION FOR AUTOMOTIVE ACCESSORY INSTALLATION
2y 5m to grant Granted Feb 24, 2026
Patent 12530014
PROCESS MODEL AUTOMATIC GENERATION SYSTEM AND PROCESS MODEL AUTOMATIC GENERATION METHOD
2y 5m to grant Granted Jan 20, 2026
Based on the examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
66%
Grant Probability
99%
With Interview (+33.1%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 221 resolved cases by this examiner. Grant probability derived from career allow rate.
