Prosecution Insights
Last updated: April 19, 2026
Application No. 17/994,089

COMPUTING DEVICE AND METHOD GENERATING OPTIMAL INPUT DATA

Office Action: Non-Final (§103, §112)
Filed: Nov 25, 2022
Examiner: HANN, JAY B
Art Unit: 2186
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 5m
Grant Probability with Interview: 95% (consistent with the 61% baseline plus this examiner's +34.1% interview lift)

Examiner Intelligence

Career Allow Rate: 61% (281 granted / 463 resolved; +5.7% vs TC avg)
Interview Lift: +34.1% higher allowance among resolved cases with an interview (a strong effect)
Avg Prosecution: 3y 5m typical timeline; 31 applications currently pending
Career History: 494 total applications across all art units

Statute-Specific Performance

§101: 21.5% (-18.5% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 13.7% (-26.3% vs TC avg)
§112: 24.9% (-15.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 463 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Claims 1-20 are presented for examination. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings received on 25 November 2022 are accepted.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The current title is "COMPUTING DEVICE AND METHOD GENERATING OPTIMAL INPUT DATA." The following title is suggested: "Correlation-Based Input Parameter Selection for Machine Learning."

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-12 and 14-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, because the specification, while being enabling for correlation-based selection of input data (see Specification [0101]-[0102]), does not reasonably provide enablement for all possible means of generating optimal input data under all reasonable yet differing definitions of "optimal" in this context. The specification does not enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the invention commensurate in scope with these claims. Claim 13 is found to cure the noted deficiency by limiting the scope of the generation of the optimal input data to a generation which is in accordance with a correlation existing between input parameter groups.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 2, and 4-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Claim 1 recites "and generate the optimal input data in relation to essential input data associated with the essential input parameter and the sample output data." The claim term "optimal input data" is interpreted in light of Specification [0032]: "Here, the term 'optimal input data' may be understood as data that must be applied (e.g., required input data) to the design simulator in order to obtain the target output data using the design simulator." However, claim 1 fails to provide antecedent basis for any simulator. Accordingly, it is unclear what the lexicographic definition of the claim term "optimal input data" means in a context without a simulator. Dependent claims 2 and 4-16 are rejected for depending from a rejected claim. Claims 3, 17, and 20 provide antecedent basis for a simulator, curing the noted indefiniteness.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 9, 10, 12-14, 17, 18, and 20

Claims 1-6, 9, 10, 12-14, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0374784 A1 to Sharma et al. [herein "Sharma"] in view of US Patent 11,348,017 B1 to Wong et al. [herein "Wong"].

Claim 1 recites "1. A computing device comprising: a processor; and a memory storing instructions." Sharma paragraph 310 discloses "Computer system 1500 includes a processor subsystem 1520 that is coupled to a system memory 1540."

Claim 1 further recites "wherein the processor is configured to executed the instructions to generate training data that provides output data associated with output parameters in response to input data associated with input parameters, the training data includes sample input data and sample output data." Sharma paragraph 37 discloses "training dataset 110 includes data samples 112A-112N, each of which includes a corresponding feature vector 202A-202N and a label 204A-204N (respectively)." Sharma figure 2 shows a "Training Dataset" (110) that includes data samples, feature vectors, and labels. Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model.
The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 1 further recites "and the processor is configured to select an essential input parameter from input parameters in accordance with an estimation model trained using the training data." Sharma paragraph 19 discloses "In the context of data science and machine learning, 'feature-selection' refers to the process of selecting, from a set of available features, a subset of features to use in the construction of a predictive model (such as a machine learning model)." Sharma paragraph 48 discloses:

the multivariate effect optimization model 104A performs a relevancy evaluation 402 to evaluate the relevancy between two or more of the features 302 and the set of labels 204 for the corresponding data samples 112. That is, in various embodiments, the relevancy evaluation 402 computes, for each of the features 302 and for groups of two or more features 302, the correlation between that particular feature 302 (or group of two or more features 302) and the label C. In the embodiment depicted in FIG. 4, the relevancy evaluation 402 is performed based on the set of binary variables XA-XM, the vector C, and the matrix D of feature vectors 202 for the training data samples 112 in the training dataset 110.

The relevancy evaluation of the multivariate effect optimization model 104A corresponds with a selection of parameters in accordance with an estimation model trained using the training data. The multivariate effect optimization model 104A is not itself the estimation model. Sharma paragraph 183 teaches "training, by the computer system, a first machine learning model based on the updated training dataset." Sharma paragraph 189 discloses "the performance of a machine learning model trained based on the candidate feature set is tested, and performance feedback information is utilized in selecting a new candidate feature set." Using the performance of a machine learning model trained on the candidate feature set to select a new candidate feature set is using the machine learning model as an estimation model for selecting the new set of essential input parameters.
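
For orientation, the following is a minimal sketch of the kind of correlation-based relevancy evaluation Sharma paragraph 48 describes. It is illustrative only: the function names, the 0.3 threshold, and the NumPy formulation are assumptions, not Sharma's implementation; only the D (feature matrix), C (label vector), and X (binary selection vector) notation follows the quoted passage.

    import numpy as np

    def relevancy_scores(D, C):
        # Absolute Pearson correlation between each feature column of D and label vector C.
        Dc = D - D.mean(axis=0)
        Cc = C - C.mean()
        cov = Dc.T @ Cc / len(C)
        return np.abs(cov / (D.std(axis=0) * C.std()))

    def select_features(D, C, threshold=0.3):
        # Binary selection vector X: 1 keeps a feature in the reduced set, 0 drops it.
        return (relevancy_scores(D, C) >= threshold).astype(int)

    rng = np.random.default_rng(0)
    D = rng.normal(size=(100, 5))                                        # matrix D of feature vectors
    C = 2.0 * D[:, 1] - 1.0 * D[:, 3] + rng.normal(scale=0.1, size=100)  # label vector C
    print(select_features(D, C))                                         # e.g., [0 1 0 1 0]

The design point this illustrates is that relevancy is computed per feature (or feature group) against the labels, and the output is a binary inclusion vector rather than a trained predictor, which is why the action distinguishes the optimization model from the estimation model.
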
Claim 1 further recites "and generate the optimal input data in relation to essential input data associated with the essential input parameter and the sample output data." The claim term "optimal input data" is interpreted in light of Specification [0032]: "Here, the term 'optimal input data' may be understood as data that must be applied (e.g., required input data) to the design simulator in order to obtain the target output data using the design simulator." Sharma paragraph 59 discloses "the computer system processes the training dataset based on an optimization model (e.g., multivariate effect optimization model 104A) to select, from the plurality of features, a subset of features to include in a reduced feature set." The selected reduced feature set corresponds with the generated optimal input data. The reduced input set corresponds with data which is required or essential input data. Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 2 further recites "2. The computing device of claim 1, wherein the processor is further configured to determine whether pre-generated training data exists, and if the pre-generated training data exists, set the pre-generated training data as the training data, else generate the training data in relation to training data generation conditions." Sharma figures 1 and 2 show "training dataset 110." This training dataset is either pre-existing or not, but Sharma does not explicitly disclose which; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 10-12 teaches "a repository of prior simulations comprises a plurality of data points, each associated with a prior simulation." A repository of prior simulations is pre-generated training data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate simulation input and output feature data points into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.
Claim 3 further recites "3. The computing device of claim 2, wherein the processor is further configured to generate the training data by executing a design simulation using the design simulator in relation to the training data generation conditions." Sharma does not explicitly disclose a design simulation using a design simulator for training; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 10-12 teaches "a repository of prior simulations comprises a plurality of data points, each associated with a prior simulation." A repository of prior simulations is data generated by executing a design simulation of a design simulator. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate simulation input and output feature data points into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 4 further recites "4. The computing device of claim 2, wherein the training data generation conditions include at least one of design simulator address data, name data, range data, adjustment data, condition data, weight data, conversion data, and sampling data." From the above list of alternatives the Examiner is selecting "sampling data." Sharma figure 2 shows the training dataset including data samples. The data samples are sampling data. Furthermore, Sharma paragraph 27 discloses "a feature ranking-based optimization model that, in some embodiments, generates weighting values that indicate a relative ranking of the importance of the features available for selection in a reduced feature set."

Claim 5 further recites "5. The computing device of claim 4, wherein the essential input parameter has a corresponding non-zero input weight." Sharma figure 3E and paragraph 46 disclose "In FIG. 3E, for example, the first entry in vector X (e.g., X[0]) is '0,' indicating that corresponding feature 302A is not to be included in the reduced feature set, the second entry in vector X (e.g., X[1]) is '1,' indicating that corresponding feature 302B is to be included in the reduced feature set." A weighting value of 1 indicating that the corresponding feature is to be included in the reduced feature set corresponds with a non-zero input weight indicating that the parameter is one of the essential input parameters. The reduced feature set corresponds with the essential input parameters.
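
To make the FIG. 3E mapping concrete, here is a minimal sketch of applying a binary selection vector X where a non-zero entry keeps the corresponding feature in the reduced feature set. The variable names are assumptions for illustration, not Sharma's code.

    import numpy as np

    X = np.array([0, 1, 0, 1, 1])                     # binary selection vector, one entry per feature
    D = np.random.default_rng(1).normal(size=(8, 5))  # matrix of feature vectors

    essential_idx = np.flatnonzero(X)                 # parameters with a non-zero input weight
    D_reduced = D[:, essential_idx]                   # reduced feature set: only "essential" columns
    print(essential_idx, D_reduced.shape)             # [1 3 4] (8, 3)
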
Claim 6 further recites "6. The computing device of claim 1, wherein the processor is further configured to train the estimation model using the sample input data and the sample output data, on which weight data is reflected, and select the essential input parameter." Sharma paragraph 183 teaches "training, by the computer system, a first machine learning model based on the updated training dataset." Sharma paragraph 189 discloses "the performance of a machine learning model trained based on the candidate feature set is tested, and performance feedback information is utilized in selecting a new candidate feature set." Using the performance of a machine learning model trained on the candidate feature set to select a new candidate feature set is using the machine learning model as an estimation model for selecting the new set of essential input parameters. Sharma paragraph 27 discloses "a feature ranking-based optimization model that, in some embodiments, generates weighting values that indicate a relative ranking of the importance of the features available for selection in a reduced feature set."

Claim 9 further recites "9. The computing device of claim 6, wherein the estimation model includes estimation blocks, wherein each estimation block among the estimation blocks provides an output parameter from among the output parameters in response to the input parameters." The claim language "estimation blocks" is interpreted in light of Specification ¶58. Sharma paragraph 65 lists various "suitable types of machine learning models," including a "recurrent neural network ('RNN')." But Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 11 lines 11-23 teaches:

generating the trained machine learning model further comprises providing 205, by a simulation settings prediction system 101 and as input to a machine learning model, each weighted input feature vector and associated output feature vector of the plurality of simulation results data structures, the machine learning model configured to generate a predicted simulation settings input feature vector based on the plurality of weighted simulation input feature vectors and associated output feature vectors, the predicted simulation settings input feature vector representative of simulator settings for a future simulation that will result in a simulation output that achieves a desired optimization.

Accordingly, the output feature vector is in response to the respective input feature vector. The trained machine learning model as a whole is at least one estimation block. Alternatively, each weighting of each input corresponds to a block of the machine learning model. See also Wong column 6 lines 15-22. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 10 further recites "10. The computing device of claim 9, wherein the processor is further configured to train the estimation blocks, such that each estimation block among the estimation blocks provides an output parameter from among the output parameters in response to the input parameters corresponding to the sample input data." Wong column 8 lines 54-64 discloses:

A machine learning model is initially fit or trained on a training dataset (e.g., a set of examples used to fit the parameters of the model). The model can be trained on the training dataset using supervised or unsupervised learning. The model is run with the training dataset and produces a result, which is then compared with a target, for each input vector in the training dataset. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.
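
The fit-compare-adjust cycle Wong column 8 describes is the standard supervised training loop. A minimal sketch follows; the linear model, learning rate, and gradient step are illustrative assumptions, not Wong's implementation.

    import numpy as np

    rng = np.random.default_rng(2)
    X_train = rng.normal(size=(64, 3))              # sample input data
    y_train = X_train @ np.array([1.5, -2.0, 0.5])  # sample output data (target)

    w = np.zeros(3)                                 # model parameters to fit
    lr = 0.05                                       # learning rate (assumed)
    for epoch in range(200):
        y_pred = X_train @ w                        # model is run and produces a result
        error = y_pred - y_train                    # result is compared with a target
        w -= lr * X_train.T @ error / len(y_train)  # parameters are adjusted
    print(np.round(w, 2))                           # converges to approx [ 1.5 -2.   0.5]
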
Claim 12 further recites "12. The computing device of claim 9, wherein each of the input parameters is classified into an input parameter group from among a plurality of input parameter groups, each of the output parameters is classified into an output parameter group from among a plurality of output parameter groups respectively corresponding to the input parameter groups, and each of the estimation blocks is classified into an estimation block group from among a plurality of estimation block groups respectively corresponding to the plurality of output parameter groups." The claim language "estimation blocks" is interpreted in light of Specification ¶58, which states "each of the first through nth estimation blocks 210_1 through 210_n may have a structure corresponding to any one of a deep neural network (DNN), a convolution neural network (CNN), a recurrent neural network (RNN), etc." Sharma paragraph 65 lists various "suitable types of machine learning models" and discloses:

an ANN may be implemented using a recurrent neural network ("RNN"), such as a long short-term memory ("LSTM") model. In further embodiments, an ANN may be implemented using an architecture that includes one or more layers of a feed-forward architecture and one or more layers of an RNN architecture.

Layers of an RNN architecture correspond with input and output parameter groups and corresponding estimation blocks. But Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 11 lines 11-23 teaches:

generating the trained machine learning model further comprises providing 205, by a simulation settings prediction system 101 and as input to a machine learning model, each weighted input feature vector and associated output feature vector of the plurality of simulation results data structures, the machine learning model configured to generate a predicted simulation settings input feature vector based on the plurality of weighted simulation input feature vectors and associated output feature vectors, the predicted simulation settings input feature vector representative of simulator settings for a future simulation that will result in a simulation output that achieves a desired optimization.

Accordingly, the output feature vector is in response to the respective input feature vector. The trained machine learning model as a whole is at least one estimation block. Alternatively, each weighting of each input corresponds to a block of the machine learning model. See also Wong column 6 lines 15-22. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.
Claim 13 further recites "13. The computing device of claim 12, wherein the processor is further configured to sequentially generate the optimal input data, if no correlation exists between the input parameter groups." Sharma paragraph 48 discloses:

the multivariate effect optimization model 104A performs a relevancy evaluation 402 to evaluate the relevancy between two or more of the features 302 and the set of labels 204 for the corresponding data samples 112. That is, in various embodiments, the relevancy evaluation 402 computes, for each of the features 302 and for groups of two or more features 302, the correlation between that particular feature 302 (or group of two or more features 302) and the label C. In the embodiment depicted in FIG. 4, the relevancy evaluation 402 is performed based on the set of binary variables XA-XM, the vector C, and the matrix D of feature vectors 202 for the training data samples 112 in the training dataset 110.

The correlation between features or groups of features is determining whether a correlation exists between input parameter groups. Sharma paragraph 66 discloses "compare a performance of the first and second machine learning models and, based on this comparison, select either the reduced feature set or the second reduced feature set as a final feature set for the training dataset (e.g., training dataset 110)." The final feature set corresponds with generating optimal input data parameters.

Claim 13 further recites "else the processor is further configured to recursively generate the optimal input data, if a correlation exists between the input parameter groups." Sharma paragraph 65 discloses:

an ANN may be implemented using a recurrent neural network ("RNN"), such as a long short-term memory ("LSTM") model. In further embodiments, an ANN may be implemented using an architecture that includes one or more layers of a feed-forward architecture and one or more layers of an RNN architecture.

A recurrent neural network (RNN) is a recursive generation. Sharma paragraph 66 discloses "compare a performance of the first and second machine learning models and, based on this comparison, select either the reduced feature set or the second reduced feature set as a final feature set for the training dataset (e.g., training dataset 110)." Comparing the performance includes determining whether a correlation exists between input parameter groups according to the multivariate effect optimization model 104A discussed above.
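
The claim 13 limitation is essentially a branch on inter-group correlation: generate values group by group when the groups are independent, and recurse when they are coupled. The sketch below illustrates only that control flow; the correlation test, the "mean as optimal value" stand-in, and the decoupling step are all assumptions and come from none of the cited references or the application.

    import numpy as np

    def correlated(groups, threshold=0.5):
        # Assumed test: strong Pearson correlation between any pair of group signals.
        r = np.corrcoef(np.vstack(groups))
        off_diagonal = r[~np.eye(len(groups), dtype=bool)]
        return bool((np.abs(off_diagonal) > threshold).any())

    def generate_optimal_input(groups):
        if len(groups) == 1 or not correlated(groups):
            # Sequential generation: each group's value is computed independently.
            return [float(g.mean()) for g in groups]
        # Recursive generation: fix the first group, remove its contribution from
        # the remaining groups, and recurse on the smaller, less-coupled problem.
        head, *rest = groups
        return [float(head.mean())] + generate_optimal_input([g - head for g in rest])

    rng = np.random.default_rng(3)
    base = rng.normal(size=50)
    coupled = base + rng.normal(scale=0.1, size=50) + 1.0   # correlated with base
    independent = rng.normal(size=50)
    print(generate_optimal_input([base, coupled, independent]))
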
Claim 14 further recites "14. The computing device of claim 1, wherein the processor is further configured to retrain the estimation model in accordance with the essential input data and the sample output data, and generate recommendation input data in accordance with an acquisition function using the estimation model following retraining of the estimation model." Sharma paragraph 189 discloses "the performance of a machine learning model trained based on the candidate feature set is tested, and performance feedback information is utilized in selecting a new candidate feature set." Training a machine learning model based on the candidate feature set corresponds with a retraining of the estimation model in accordance with the essential input data. Selecting a new candidate feature set corresponds with generating recommendations following the retraining. Sharma paragraph 190 discloses "the feedback-assisted optimization model 104D is provided as follows: [equation (4)]." The feedback-assisted optimization model equation corresponds with an acquisition function used to help generate the recommendation output. Sharma does not explicitly disclose output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 17 recites "17. A method of generating optimal input data for a design simulator providing output data related to output parameters in response to input data related to input parameters." The claim term "optimal input data" is interpreted in light of Specification [0032]: "Here, the term 'optimal input data' may be understood as data that must be applied (e.g., required input data) to the design simulator in order to obtain the target output data using the design simulator." Sharma paragraph 59 discloses "the computer system processes the training dataset based on an optimization model (e.g., multivariate effect optimization model 104A) to select, from the plurality of features, a subset of features to include in a reduced feature set." The selected reduced feature set corresponds with the generated optimal input data. The reduced input set corresponds with data which is required or essential input data. Sharma does not explicitly disclose a simulator or the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong.
One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 17 further recites "the method comprising: generating training data including sample input data and sample output data." Sharma paragraph 37 discloses "training dataset 110 includes data samples 112A-112N, each of which includes a corresponding feature vector 202A-202N and a label 204A-204N (respectively)." Sharma figure 2 shows a "Training Dataset" (110) that includes data samples, feature vectors, and labels. Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 17 further recites "selecting at least one essential input parameter affecting a plurality of output parameters from among the input parameters in accordance with an estimation model trained using the training data." Sharma paragraph 19 discloses "In the context of data science and machine learning, 'feature-selection' refers to the process of selecting, from a set of available features, a subset of features to use in the construction of a predictive model (such as a machine learning model)." Sharma paragraph 48 discloses:

the multivariate effect optimization model 104A performs a relevancy evaluation 402 to evaluate the relevancy between two or more of the features 302 and the set of labels 204 for the corresponding data samples 112. That is, in various embodiments, the relevancy evaluation 402 computes, for each of the features 302 and for groups of two or more features 302, the correlation between that particular feature 302 (or group of two or more features 302) and the label C. In the embodiment depicted in FIG. 4, the relevancy evaluation 402 is performed based on the set of binary variables XA-XM, the vector C, and the matrix D of feature vectors 202 for the training data samples 112 in the training dataset 110.

The relevancy evaluation of the multivariate effect optimization model 104A corresponds with a selection of parameters in accordance with an estimation model trained using the training data. The multivariate effect optimization model 104A is not itself the estimation model.
Sharma paragraph 183 teaches "training, by the computer system, a first machine learning model based on the updated training dataset." Sharma paragraph 189 discloses "the performance of a machine learning model trained based on the candidate feature set is tested, and performance feedback information is utilized in selecting a new candidate feature set." Using the performance of a machine learning model trained on the candidate feature set to select a new candidate feature set is using the machine learning model as an estimation model for selecting the new set of essential input parameters.

Claim 17 further recites "and generating the optimal input data in accordance with essential input data corresponding to the at least one essential input parameter and the sample output data." The claim term "optimal input data" is interpreted in light of Specification [0032]: "Here, the term 'optimal input data' may be understood as data that must be applied (e.g., required input data) to the design simulator in order to obtain the target output data using the design simulator." Sharma paragraph 59 discloses "the computer system processes the training dataset based on an optimization model (e.g., multivariate effect optimization model 104A) to select, from the plurality of features, a subset of features to include in a reduced feature set." The selected reduced feature set corresponds with the generated optimal input data. The reduced input set corresponds with data which is required or essential input data. Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Dependent claim 18 is substantially similar to claim 6 above and is rejected for the same reasons.
Claim 20 recites "20. Non-transitory storage medium, when executed by at least one processor, storing instructions for the at least one processor to perform." Sharma paragraph 310 discloses "Computer system 1500 includes a processor subsystem 1520 that is coupled to a system memory 1540."

Claim 20 further recites "a method generating optimal input data for a design simulator providing output data related to output parameters in response to input data related to input parameters." The claim term "optimal input data" is interpreted in light of Specification [0032]: "Here, the term 'optimal input data' may be understood as data that must be applied (e.g., required input data) to the design simulator in order to obtain the target output data using the design simulator." Sharma paragraph 59 discloses "the computer system processes the training dataset based on an optimization model (e.g., multivariate effect optimization model 104A) to select, from the plurality of features, a subset of features to include in a reduced feature set." The selected reduced feature set corresponds with the generated optimal input data. The reduced input set corresponds with data which is required or essential input data. Sharma does not explicitly disclose a simulator or the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 20 further recites "wherein the method comprises: generating training data including sample input data and sample output data." Sharma paragraph 37 discloses "training dataset 110 includes data samples 112A-112N, each of which includes a corresponding feature vector 202A-202N and a label 204A-204N (respectively)." Sharma figure 2 shows a "Training Dataset" (110) that includes data samples, feature vectors, and labels. Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).
Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Claim 20 further recites "selecting at least one essential input parameter affecting a plurality of output parameters from among a plurality of input parameters in accordance with an estimation model trained using the training data." Sharma paragraph 19 discloses "In the context of data science and machine learning, 'feature-selection' refers to the process of selecting, from a set of available features, a subset of features to use in the construction of a predictive model (such as a machine learning model)." Sharma paragraph 48 discloses:

the multivariate effect optimization model 104A performs a relevancy evaluation 402 to evaluate the relevancy between two or more of the features 302 and the set of labels 204 for the corresponding data samples 112. That is, in various embodiments, the relevancy evaluation 402 computes, for each of the features 302 and for groups of two or more features 302, the correlation between that particular feature 302 (or group of two or more features 302) and the label C. In the embodiment depicted in FIG. 4, the relevancy evaluation 402 is performed based on the set of binary variables XA-XM, the vector C, and the matrix D of feature vectors 202 for the training data samples 112 in the training dataset 110.

The relevancy evaluation of the multivariate effect optimization model 104A corresponds with a selection of parameters in accordance with an estimation model trained using the training data. The multivariate effect optimization model 104A is not itself the estimation model. Sharma paragraph 183 teaches "training, by the computer system, a first machine learning model based on the updated training dataset." Sharma paragraph 189 discloses "the performance of a machine learning model trained based on the candidate feature set is tested, and performance feedback information is utilized in selecting a new candidate feature set." Using the performance of a machine learning model trained on the candidate feature set to select a new candidate feature set is using the machine learning model as an estimation model for selecting the new set of essential input parameters.

Claim 20 further recites "and generating the optimal input data using essential input data corresponding to the at least one essential input parameter and the sample output data." The claim term "optimal input data" is interpreted in light of Specification [0032]: "Here, the term 'optimal input data' may be understood as data that must be applied (e.g., required input data) to the design simulator in order to obtain the target output data using the design simulator." Sharma paragraph 59 discloses "the computer system processes the training dataset based on an optimization model (e.g., multivariate effect optimization model 104A) to select, from the plurality of features, a subset of features to include in a reduced feature set." The selected reduced feature set corresponds with the generated optimal input data.
The reduced input set corresponds with data which is required or essential input data. Sharma does not explicitly disclose the training dataset including input data and output data specifically; however, in the analogous art of machine learning for semiconductor design, Wong column 6 lines 15-22 teaches:

Features of each prior simulation (e.g., input variables, simulation parameters, and output results) are extracted and provided to a machine learning model. The machine learning model applies an algorithm as described herein to minimize an error between the model output (e.g., optimized predicted simulator system settings) and the prior simulations results (e.g., simulation output based upon the available input variables and simulation parameters).

Features including input variables and output results provided to a machine learning model correspond to training data including sample input data and sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma and Wong. One having ordinary skill in the art would have found motivation to incorporate input and output features into the machine learning optimization system for the advantageous purpose of optimizing TCAD simulation for future simulations. See Wong column 2 lines 20-24.

Dependent Claims 7, 8, and 11

Claims 7, 8, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma and Wong as applied to claim 6 above, and further in view of Akande, K., et al., "Investigating the effect of correlation-based feature selection on the performance of neural network in reservoir characterization," J. Natural Gas Science & Engineering, vol. 27, pp. 98-108 (2015) [herein "Akande"].

Claim 7 further recites "7. The computing device of claim 6, wherein the processor is further configured to train the estimation model based on a loss between a sample estimation output data generated by the estimation model in response to the sample input data and the sample output data." Sharma paragraph 194, last sentence, discloses "performance goals (e.g., as measured by log-loss scores, accuracy, or any other suitable performance metric)." A log-loss score is a loss. However, the log-loss function is a measure of classification accuracy and is not based on a loss between estimated output data and sample output data. However, in the analogous art of correlation-based feature selection with neural network performance, Akande page 100, left column, first paragraph teaches:

Typically, the minimization objective is to minimize sum of squared errors till a predetermined threshold is reached which signifies that the system has attained a satisfactory performance based on the predefined criteria

Minimizing a sum of squared errors is training (performance feedback) based on a loss between estimation output data and sample output data. Akande page 103, section 2.5.2 teaches:

2.5.2. Root mean-squared error (RMSE). This is calculated by taking the mean of the square of the differences between each of the predicted output xi and its corresponding actual value yi and then taking the square root of the resulting value.

RMSE is a loss function generated in response to input and output sample data. The y values correspond with the output sample data. The x values correspond with the estimation model's response to the respective input.
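
Reconstructed from the prose description above, for n samples with predicted outputs x_i and actual values y_i, the RMSE is:

    \mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x_i - y_i)^2 }

This is the standard formulation consistent with the quoted description; the exact notation of Akande's equation (10) is not reproduced in this action, so minor notational differences are possible.
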
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma, Wong, and Akande. One having ordinary skill in the art would have found motivation to use RMSE in the machine learning optimization system because RMSE has art-recognized suitability for the intended purpose of "Assessment criteria for performance evaluation." See Akande page 103, section 2.5 heading, and MPEP §2144.07.

Claim 8 further recites "8. The computing device of claim 7, wherein the processor is further configured to calculate the loss based on a difference between the sample estimation output data and the sample output data." Sharma paragraph 194, last sentence, discloses "performance goals (e.g., as measured by log-loss scores, accuracy, or any other suitable performance metric)." A log-loss score is a loss. However, the log-loss function is a measure of classification accuracy and is not based on a loss between estimated output data and sample output data. However, in the analogous art of correlation-based feature selection with neural network performance, Akande page 100, left column, first paragraph teaches:

Typically, the minimization objective is to minimize sum of squared errors till a predetermined threshold is reached which signifies that the system has attained a satisfactory performance based on the predefined criteria

Minimizing a sum of squared errors is training (performance feedback) based on a loss between estimation output data and sample output data. Akande page 103, section 2.5.2 teaches that RMSE "is calculated by taking the mean of the square of the differences between each of the predicted output xi and its corresponding actual value yi and then taking the square root of the resulting value." RMSE is a loss function generated in response to input and output sample data; the y values correspond with the output sample data, and the x values correspond with the estimation model's response to the respective input. Akande page 103, equation (10), shows that RMSE takes a difference between the x and y values, corresponding with a difference between the estimation output data and the sample output data. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma, Wong, and Akande. One having ordinary skill in the art would have found motivation to use RMSE in the machine learning optimization system because RMSE has art-recognized suitability for the intended purpose of "Assessment criteria for performance evaluation." See Akande page 103, section 2.5 heading, and MPEP §2144.07.

Claim 8 further recites "and a difference between a change amount of the output parameters associated with the sample estimation output data and a change amount of the output parameters associated with the sample output data." Sharma paragraph 66 discloses "compare a performance of the first and second machine learning models and, based on this comparison, select either the reduced feature set or the second reduced feature set as a final feature set for the training dataset (e.g., training dataset 110)." Comparing the performance of a first and second machine learning model corresponds with comparing respective performance values as a change amount.
As combined above, with RMSE as the performance assessment, the change in performance corresponds with a change in estimation output data and a change in output data, respectively, for the respective x and y values.

Claim 11 further recites "11. The computing device of claim 9, wherein the processor includes a plurality of processors, and a plurality of essential input parameters is selected in parallel using the plurality of processors in relation to the estimation blocks." Sharma paragraph 311 discloses "Processor subsystem 1520 may include one or more processors or processing units." Two or more processors constitute a plurality of processors. Sharma paragraph 65 discloses "Method 500 may include training any of various suitable types of machine learning models, as desired. In some embodiments, for example, a reduced feature set selected according to the disclosed techniques may be used to train an artificial neural network ('ANN') implemented using any suitable neural network architecture." Wong column 16 lines 39-40 teaches "A process or algorithm can be partitioned into multiple threads that can be executed in parallel." But Wong fails to teach that selection of a reduced feature set (i.e., by a machine learning model) is one of the processes that should be executed in parallel. Neither Sharma nor Wong explicitly discloses estimation blocks operating in parallel; however, in the analogous art of correlation-based feature selection with neural network performance, Akande page 99, section 2.1 teaches "ANN is a powerful learning algorithm which has proven very successful in learning complex patterns existing between variables. ANN learns in a parallel and distributed manner." It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma, Wong, and Akande. One having ordinary skill in the art would have found motivation to use an ANN operating in parallel in the machine learning optimization system for the advantageous purpose of "learning complex patterns existing between variables." See Akande page 99, section 2.1.
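
As a concrete illustration of the claim 11 concept (not drawn from the cited references), candidate input parameters can be scored in parallel across multiple processors, here with Python's standard multiprocessing pool. The per-parameter scoring function below is an assumed stand-in for the relevancy evaluation discussed above.

    import numpy as np
    from multiprocessing import Pool

    def score_parameter(args):
        # Assumed per-parameter relevancy score: |corr(feature, labels)|.
        feature, labels = args
        return abs(float(np.corrcoef(feature, labels)[0, 1]))

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        D = rng.normal(size=(200, 8))                        # candidate input parameters
        C = D[:, 2] + 0.8 * D[:, 5] + rng.normal(size=200)   # sample output data
        with Pool(processes=4) as pool:                      # a plurality of processors
            scores = pool.map(score_parameter, [(D[:, j], C) for j in range(8)])
        print([j for j, s in enumerate(scores) if s > 0.4])  # e.g., [2, 5]
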
Dependent Claims 15, 16, and 19

Claims 15, 16, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma and Wong as applied to claim 14 above, and further in view of US 2023/0334360 A1 to Kormilitsin et al. [herein "Kormilitsin"].

Claim 15 further recites "15. The computing device of claim 14, wherein the processor is further configured to determine whether termination conditions have been satisfied, and if the termination conditions have been satisfied, set recommendation data yielding maximum value for the acquisition function as the optimal input data, else if the termination conditions have not been satisfied, generate recommendation output data from the recommendation input data, and retrain the estimation model using the recommendation data including both the recommendation input data and the recommendation output data." Sharma paragraph 66, last sentence, discloses "compare a performance of the first and second machine learning models and, based on this comparison, select either the reduced feature set or the second reduced feature set as a final feature set for the training dataset (e.g., training dataset 110)." Sharma paragraph 189 discloses "the disclosed feedback-assisted feature-selection techniques are an iterative process in which a candidate feature set is selected." Sharma paragraph 207 discloses:

Feed-back-assisted optimization model 104D utilizes the multivariate effect optimization model 104A that selects the subset of feature that maximizes a measure of relevancy between pairs of the features 302 and the set of labels 204 for the data samples 112.

Maximizing a measure of relevancy corresponds with yielding a maximum value for an acquisition function. Sharma does not explicitly disclose a termination condition for the iterative process; however, in the analogous art of parameter selection, Kormilitsin paragraph 43 teaches:

a convergence score is calculated based on the updated bucket ranking 418 and the initial bucket ranking 402. The decision block 422 may determine to iterate based on this convergence score. For example, a low convergence score (e.g., less than a threshold) may indicate that, in response to the most-recent iteration, many features changed which bucket they belong to (i.e., the method 400 has not converged). In this case, the method 400 starts a next iteration, using the updated bucket ranking 418 as the initial bucket ranking 402 for this next iteration. A high convergence score (e.g., above a threshold) may indicate that the updated bucket ranking 418 is so similar to the initial bucket ranking 402 that an additional iteration is unlikely to yield significant additional changes to bucket rank (i.e., the method 400 has converged).

A convergence score threshold corresponds with a termination condition. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma, Wong, and Kormilitsin. One having ordinary skill in the art would have found motivation to use a convergence threshold in the machine learning optimization system for the advantageous purpose of determining how many iterations to run. See Kormilitsin ¶43.
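
The following sketch illustrates a convergence-score termination condition in the spirit of Kormilitsin paragraph 43: iterate while the updated ranking still differs substantially from the previous one, and terminate once a similarity threshold is met. The score definition, the 0.9 threshold, and the toy update rule are assumptions for illustration, not Kormilitsin's method.

    import numpy as np

    def convergence_score(prev_rank, new_rank):
        # Assumed score: fraction of features whose rank position did not change.
        return float(np.mean(prev_rank == new_rank))

    def iterate_until_converged(update, rank, threshold=0.9, max_iters=50):
        for _ in range(max_iters):
            new_rank = update(rank)
            if convergence_score(rank, new_rank) >= threshold:
                return new_rank            # high score: converged, terminate
            rank = new_rank                # low score: start the next iteration
        return rank

    def update(rank):
        # Toy update: one pass of adjacent swaps nudging the ranking toward sorted order.
        out = rank.copy()
        for i in range(len(out) - 1):
            if out[i] > out[i + 1]:
                out[i], out[i + 1] = out[i + 1], out[i]
        return out

    print(iterate_until_converged(update, np.array([3, 1, 4, 0, 2, 9, 7, 5, 8, 6])))
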
Claim 16 further recites "16. The computing device of claim 15, wherein the processor is further configured to generate recommendation input data following retraining of the estimation model using the recommendation data, and again determine whether termination conditions have been satisfied." Sharma paragraph 189 discloses "the disclosed feedback-assisted feature-selection techniques are an iterative process in which a candidate feature set is selected." Sharma paragraph 207 discloses:

Feed-back-assisted optimization model 104D utilizes the multivariate effect optimization model 104A that selects the subset of feature that maximizes a measure of relevancy between pairs of the features 302 and the set of labels 204 for the data samples 112.

Maximizing a measure of relevancy corresponds with yielding a maximum value for an acquisition function. Sharma does not explicitly disclose a termination condition for the iterative process; however, in the analogous art of parameter selection, Kormilitsin paragraph 43 teaches "a convergence score is calculated based on the updated bucket ranking 418 and the initial bucket ranking 402." The updated ranking corresponds with a retraining. The convergence-score termination condition accordingly corresponds with an iteratively updated process. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Sharma, Wong, and Kormilitsin. One having ordinary skill in the art would have found motivation to use a convergence threshold in the machine learning optimization system for the advantageous purpose of determining how many iterations to run. See Kormilitsin ¶43.

Claim 19 further recites "19. The method of claim 18, wherein the generating of the optimal input data includes: retraining the estimation model using the essential input data and the sample output data; determining whether termination conditions have been satisfied; and if the termination conditions have been satisfied, generating recommendation input data yielding maximum value for an acquisition function according to the estimation model following retraining of the estimation model using recommendation data including the recommendation input data and recommendation output data generated from the recommendation input data." Sharma paragraph 189 discloses "the performance of a machine learning model trained based on the candidate feature set is tested, and performance feedback information is utilized in selecting a new candidate feature set." Training a machine learning model based on the candidate feature set corresponds with a retraining of the estimation model in accordance with the essential input data. Selecting a new candidate feature set corresponds with generating recommendations following the retraining. Sharma paragraph 190 discloses "the feedback-assisted optimization model 104D is provided as follows: [equation (4)]." The feedback-assisted optimization model equation corresponds with an acquisition function used to help generate the recommendation output.
Conclusion

Prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Zhang, M., et al., "Development of Input Variable Selection and Structural Optimization Algorithm for Recurrent Neural Network," Proceedings of the 40th Chinese Control Conference, pp. 8094-8099 (2021). Teaches input variable selection for recurrent neural networks; Algorithm 1 on page 8097 teaches a dataset of (x, y) pairs divided into a training dataset and a testing dataset.

Roy, D., et al., "Feature Selection using Deep Neural Networks," IEEE International Joint Conference on Neural Networks, pp. 1-6 (2015). Teaches feature selection using PCA.

May, R., et al., "Review of Input Variable Selection Methods for Artificial Neural Networks," Artificial Neural Networks - Methodological Advances and Biomedical Applications (2011). Table 2 on page 41 lists several different input variable selection algorithms.

US 2023/0059132 A1 (Li, Quanzheng, et al.), "Deep Learning for Inverse Problems Without Training Data."

US 11,645,555 B2 (Mroueh, Youssef, et al.), "Feature Selection Using Sobolev Independence Criterion."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jay B. Hann, whose telephone number is (571) 272-3330.
The examiner can normally be reached M-F 10am-7pm EDT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Renee Chavez, can be reached at (571) 270-1104. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jay Hann/
Primary Examiner, Art Unit 2186
7 March 2026

Prosecution Timeline

Nov 25, 2022
Application Filed
Mar 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12580384
AUTOMATION TOOL TO CREATE CHRONOLOGICAL AC POWER FLOW CASES FOR LARGE INTERCONNECTED SYSTEMS
2y 5m to grant; granted Mar 17, 2026
Patent 12573182
COMPUTER VISION AND SPEECH ALGORITHM DESIGN SERVICE
2y 5m to grant; granted Mar 10, 2026
Patent 12560740
METHOD FOR MODELLING THE FORMATION OF A SEDIMENTARY BASIN USING A STRATIGRAPHIC FORWARD MODELING PROGRAM
2y 5m to grant; granted Feb 24, 2026
Patent 12560741
System and Method to Develop Naturally Fractured Hydrocarbon Reservoirs Using A Fracture Density Index
2y 5m to grant; granted Feb 24, 2026
Patent 12560067
METHOD FOR HYDRAULIC FRACTURING AND MITIGATING PROPPANT FLOWBACK
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
61%
Grant Probability
95%
With Interview (+34.1%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 463 resolved cases by this examiner. Grant probability derived from career allow rate.
