Prosecution Insights
Last updated: April 19, 2026
Application No. 17/137,773

SEMICONDUCTOR DESIGN OPTIMIZATION USING AT LEAST ONE NEURAL NETWORK

Final Rejection: §103, §112
Filed
Dec 30, 2020
Examiner
WU, NICHOLAS S
Art Unit
2148
Tech Center
2100 — Computer Architecture & Software
Assignee
Semiconductor Components Industries LLC
OA Round
4 (Final)
Grant Probability: 47% (Moderate)
OA Rounds: 5-6
Time to Grant: 3y 9m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 47% (18 granted / 38 resolved; -7.6% vs TC avg)
Interview Lift: +43.1% higher allowance rate in resolved cases with an interview than without
Avg Prosecution: 3y 9m typical timeline; 44 applications currently pending
Career History: 82 total applications across all art units

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
Comparison baseline: Tech Center average estimate. Based on career data from 38 resolved cases.
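
For context on how the headline percentages above are derived, the short Python sketch below reproduces the arithmetic: an allowance rate is granted cases over resolved cases, the "vs TC avg" delta is the examiner's rate minus the Tech Center average, and the interview lift is the allowance-rate difference between resolved cases with and without an interview. Only the 18 granted / 38 resolved totals appear above; the with/without-interview split and the Tech Center baseline in the sketch are hypothetical placeholders chosen to land near the reported figures.

```python
# Illustrative arithmetic behind the examiner metrics above.
# Only "18 granted / 38 resolved" is stated; the interview split and the
# Tech Center baseline below are hypothetical placeholders.

granted, resolved = 18, 38
career_allow_rate = granted / resolved                 # ~0.474 -> shown as 47%

tc_avg_allow_rate = 0.55                               # hypothetical TC baseline
vs_tc_avg = career_allow_rate - tc_avg_allow_rate      # ~-0.076 -> "-7.6% vs TC avg"

# Hypothetical split of the 38 resolved cases by whether an interview was held.
with_interview = {"granted": 11, "resolved": 15}       # placeholder counts
without_interview = {"granted": 7, "resolved": 23}     # placeholder counts

rate_with = with_interview["granted"] / with_interview["resolved"]            # ~73.3%
rate_without = without_interview["granted"] / without_interview["resolved"]   # ~30.4%
interview_lift = rate_with - rate_without              # ~+0.43 -> the "+43.1%" card above

print(f"allow rate {career_allow_rate:.1%}, vs TC {vs_tc_avg:+.1%}, lift {interview_lift:+.1%}")
```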

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 07/07/2025 (“Remarks”) have been fully considered but they are not persuasive. Regarding the 103 rejections, applicant's arguments filed with respect to the prior art rejections have been fully considered but they are moot. Applicant has amended the claims to recite new combinations of limitations. Regarding applicant's arguments, under further search and consideration, the prior art references still teach the amended limitations to the independent claims. Specifically, primary reference Kim has cited portions that cover the new amended limitations of “wherein execution of the process-level simulation program generates at least a portion of training data for the first neural network, the circuit-level simulation program and the process-level simulation program are not executed to generate the design model”. Additionally, secondary reference Wang teaches the newly amended limitations of “associated with a circuit-level simulation program”. Please see the updated 103 rejections, necessitated by Amendment.

Claim Rejections - 35 USC § 112: New Matter

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-4, 6-8, 11-15, 17, and 24-30 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 1, the claim recites the limitations “the circuit-level simulation program and the process-level simulation program are not executed to generate the design model.” The limitations are considered new matter because the original disclosure does not appear to provide support for excluding the execution of the circuit-level and process-level simulation programs while generating the design model.
The closest support from the original disclosure appears to be not executing the process-level simulation program given set process conditions (see Specification, ⁋72, “For example, for a given set of process conditions (as provided by the first parameters 411), the first neural network 414-1 (after being trained) can predict the second parameters 413 (e.g., the SPICE model parameters or the SPICE simulations). Then, the neural network 414-2 can be trained using only the second parameters 413 (e.g., the SPICE simulations), where the neural network 412-2 can be used to predict system level characteristics such as efficiency. In this matter, additional TCAD simulations do not have to be executed because the first neural network 414-1 can predict what the SPICE model parameters will be for a given set of process conditions, which can decrease the amount of time to generate training data and/or the amount of training data that is required to train the first neural network 414-1 and the second neural network 414-2.”). However, there is no mention that the circuit-level simulation program and the process-level simulation program are not executed to generate the design model. The courts have determined that the introduction of claim changes which involve narrowing the claims by introducing elements or limitations which are not supported by the as-filed disclosure is a violation of the written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph (MPEP 2163.05 Section II). The noted limitations are therefore considered new matter.

Regarding claims 11 and 17, the claims recite similar limitations to claim 1 and are therefore rejected under the same rationale. Regarding claims 2-4, 12-15, and 24-30, the claims are rejected for at least their dependence on claims 1, 11, or 17.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1, 3, 11-12, 17, 26-27, and 29-30 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”).

Regarding claim 1, Kim discloses: generating a plurality of predictive models by configuring a first neural network and a second neural network, (Kim, ⁋61, “Next, in the example embodiment illustrated in FIG.
10, the first machine learning model and the second machine learning model may sequentially perform training [generating a plurality of predictive models by configuring a first neural network and a second neural network,]. Referring to FIG. 10, first, a sample data set for training may be collected (S40).” and Kim ⁋27, “Embodiments of the inventive concept are described herein in the context of a semiconductor fabrication process simulation system that includes one or more machine learning engines. It will be understood that embodiments of the inventive concept are not limited to particular implementations of the semiconductor fabrication process simulation system and various types of Artificial Intelligence (AI) systems may be used including, but not limited to, a multi-layer neural network [neural network]”). including: configuring the first neural network to predict circuit parameters,…based on first simulation data generated by a process-level simulation program, configured to simulate a fabrication process; (Kim, ⁋62, “The input data input to the first machine learning model may be process parameters input to a simulation tool [based on first simulation data generated by a process-level simulation program, configured to simulate a fabrication process;] in a process of collecting a sample data set. The first machine learning model receiving process parameters may output predictive data representing a result of the process. As described above with reference to FIG. 9, the predictive data may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like.” [including: configuring the first neural network to predict circuit parameters,]). and configuring the second neural network to predict system parameters based on second simulation data, the second simulation data including the circuit parameters predicted by the first neural network, (Kim, ⁋64-65, “When the training of the first machine learning model is completed, an output of the first machine learning model for which training has been completed may be input to a second machine learning model (S44). In an example embodiment, the output of the first machine learning model may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like, and accordingly, a physical structure of a semiconductor device may be expressed. The second machine learning model may receive an output of the first machine learning model representing the physical structure of the semiconductor device; the output of the first model is interpreted as simulation data as its input was simulation data (i.e. based on second simulation data, the second simulation data including the circuit parameters predicted by the first neural network,) and output predictive data representing electrical characteristics of the semiconductor device. [0065] For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”; the threshold voltage, on/off current, etc. electrical characteristics are interpreted as performance metrics or system parameters (i.e. and configuring the second neural network to predict system parameters based on second simulation data,)). 
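
As context for the mapping above: the cited Kim passages describe two networks trained in sequence, the first taking process-level simulation inputs and predicting device/circuit parameters, the second taking those parameters and predicting electrical (system-level) characteristics. The sketch below is a minimal illustration of that cascaded arrangement using scikit-learn; the feature names, layer sizes, and randomly generated data are hypothetical stand-ins, not taken from Kim, Wang, or the application.

```python
# Minimal sketch of a cascaded two-network surrogate in the style described by
# the cited passages (process inputs -> device parameters -> electrical metrics).
# All data, dimensions, and names here are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins for process-level (e.g., TCAD-style) simulation results:
# process conditions and the device parameters those conditions produce.
process_conditions = rng.uniform(size=(200, 4))   # e.g., dose, energy, anneal temp/time
device_params = rng.uniform(size=(200, 3))        # e.g., junction depth, t_ox, L_eff

# Stand-ins for circuit/system-level targets (e.g., Vth, on-current, off-current).
electrical_metrics = rng.uniform(size=(200, 3))

# First network: process conditions -> device/circuit parameters.
first_nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
first_nn.fit(process_conditions, device_params)

# Second network: device/circuit parameters -> electrical (system-level) metrics.
second_nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
second_nn.fit(device_params, electrical_metrics)

# Cascaded inference: once both models are trained, new process conditions can be
# evaluated without re-running the underlying simulators.
new_conditions = rng.uniform(size=(5, 4))
predicted_metrics = second_nn.predict(first_nn.predict(new_conditions))
print(predicted_metrics.shape)   # (5, 3)
```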
the plurality of predictive models including a first predictive model and a second predictive model, the first predictive model configured to predict a first performance metric of a semiconductor device, the second predictive model configured to predict a second performance metric of the semiconductor device; (Kim, ⁋65, “For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”; the second machine learning model is interpreted as the first and second predictive models as the second machine learning model predicts multiple performance metrics (i.e. the plurality of predictive models including a first predictive model and a second predictive model, the first predictive model configured to predict a first performance metric of a semiconductor device, the second predictive model configured to predict a second performance metric of the semiconductor device;)). and generating a design model based on a set of input parameters that are inputted to the plurality of predictive models, (Kim, ⁋78, “Referring to FIG. 14, a decision model 400 [and generating a design model] may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device, and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments, a comparison module 430, and the like. In an example embodiment, the input module 410 may set the process variable 411 and the design variable 412 to arbitrary initial values.” [based on a set of input parameters that are inputted to the plurality of predictive models,]). the design model including a set of design parameters for the semiconductor device such that the first performance metric and the second performance metric achieve respective threshold conditions, (Kim, ⁋80, “As a result of the comparison, if a difference between the predictive data and the target data 401 is greater than or equal to a predetermined threshold difference, the input module 410 may adjust values of the process variable 411 and/or the design variable 412 [the design model including a set of design parameters for the semiconductor device such that the first system performance metric and the second system performance metric]. As a result of the comparison, if the difference between the predictive data and the target data 401 is less than a predetermined threshold difference [achieve respective threshold conditions,], the input module 410 may output the values for the adjusted process variable 402 and the adjusted design variable 403.”). wherein execution of the process-level simulation program generates at least a portion of training data for the first neural network, (Kim, ⁋41, “When the machine learning model performs training using a sample data set obtained by simulations using the simulation tool [wherein execution of the process-level simulation program generates at least a portion of training data for the first neural network,]”). …simulation program and the process-level simulation program are not executed to generate the design model. (Kim, 78, “Referring to FIG. 14, a decision model 400 [the design model.] 
may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device, and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments; the machine learning model being used after it has finished training is interpreted as the simulation programs not being executed as the training of the model has finished prior to generating the design model (i.e. …simulation program and the process- level simulation program are not executed to generate the design model.)”). While Kim teaches the use of multiple predictive models for predicting multiple performance metrics, Kim does not explicitly teach: A semiconductor design system comprising: at least one processor; and a non-transitory computer-readable medium storing executable instructions that when executed cause the at least one processor to execute operations, the operations comprising: …associated with a circuit-level simulation program,… Wang teaches: A semiconductor design system comprising: at least one processor; and a non-transitory computer-readable medium storing executable instructions that when executed cause the at least one processor to execute operations, the operations comprising: (Wang, ⁋18, “ a system for generating a model of a transistor includes: a processor; and memory storing instructions that, when executed by the processor, cause the processor to: [A semiconductor design system comprising: at least one processor; and a non-transitory computer-readable medium storing executable instructions that when executed cause the at least one processor to execute operations, the operations comprising:]”). …associated with a circuit-level simulation program,… (Wang, ⁋66, “the computing system runs a simulation (e.g., a SPICE simulation) of a circuit […associated with a circuit-level simulation program,…], where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit.”). Kim and Wang are both in the same field of endeavor (i.e. semiconductor simulation). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim and Wang to teach the above limitation(s). The motivation for doing so is that SPICE simulations allow the user to simulate the behavior of an element of a circuit (cf. Wang, ⁋66, “runs a simulation (e.g., a SPICE simulation) of a circuit, where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit”). Regarding claim 3, Kim in view of Wang teaches the semiconductor design system of claim 1. Kim further teaches wherein the first simulation data includes technology computer-aided design (TCAD) simulation data, (Kim, ⁋27, “For example, in order to accurately predict the electrical characteristics of semiconductor devices, a machine learning model may be trained in advance by using the electrical characteristics of semiconductor devices predicted using a simulation tool such as a Technology Computer Aided Design (TCAD) [wherein the first simulation data includes technology computer-aided design (TCAD) simulation data,].”). 
Wang further teaches and the second simulation data includes simulation program with integrated circuit emphasis (SPICE) variables. (Wang, ⁋66, “the computing system runs a simulation (e.g., a SPICE simulation) of a circuit, where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit [and the second simulation data includes simulation program with integrated circuit emphasis (SPICE) variables.].”). It would have been obvious to one of ordinary skill in the art before the effective filling date of the present application to combine the teachings of Wang with the teachings of Kim for the same reasons disclosed in claim 1. Regarding claim 11, Kim discloses: generating a plurality of predictive models by configuring a first neural network and a second neural network, (Kim, ⁋61, “Next, in the example embodiment illustrated in FIG. 10, the first machine learning model and the second machine learning model may sequentially perform training [generating a plurality of predictive models by configuring a first neural network and a second neural network,]. Referring to FIG. 10, first, a sample data set for training may be collected (S40).” and Kim ⁋27, “Embodiments of the inventive concept are described herein in the context of a semiconductor fabrication process simulation system that includes one or more machine learning engines. It will be understood that embodiments of the inventive concept are not limited to particular implementations of the semiconductor fabrication process simulation system and various types of Artificial Intelligence (AI) systems may be used including, but not limited to, a multi-layer neural network [neural network]”). including: configuring the first neural network to predict circuit parameters,…based on first simulation data generated by a process-level simulation program, configured to simulate a fabrication process; (Kim, ⁋62, “The input data input to the first machine learning model may be process parameters input to a simulation tool [based on first simulation data generated by a process-level simulation program, configured to simulate a fabrication process;] in a process of collecting a sample data set. The first machine learning model receiving process parameters may output predictive data representing a result of the process. As described above with reference to FIG. 9, the predictive data may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like.” [including: configuring the first neural network to predict circuit parameters,]). and configuring the second neural network to predict system parameters based on second simulation data, the second simulation data including the circuit parameters predicted by the first neural network, (Kim, ⁋64-65, “When the training of the first machine learning model is completed, an output of the first machine learning model for which training has been completed may be input to a second machine learning model (S44). In an example embodiment, the output of the first machine learning model may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like, and accordingly, a physical structure of a semiconductor device may be expressed. 
The second machine learning model may receive an output of the first machine learning model representing the physical structure of the semiconductor device; the output of the first model is interpreted as simulation data as its input was simulation data (i.e. based on second simulation data, the second simulation data including the circuit parameters predicted by the first neural network,) and output predictive data representing electrical characteristics of the semiconductor device. [0065] For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”; the threshold voltage, on/off current, etc. electrical characteristics are interpreted as performance metrics or system parameters (i.e. and configuring the second neural network to predict system parameters based on second simulation data,)). the plurality of predictive models including a first predictive model configured to predict efficiency of a semiconductor device and a second predictive model configured to predict breakdown voltage of the semiconductor device; (Kim, ⁋65, “For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”; the second machine learning model is interpreted as the first and second predictive models as the second machine learning model predicts multiple performance metrics and a current-voltage characteristic is interpreted as an efficiency and a threshold voltage is interpreted as a breakdown voltage (i.e. the plurality of predictive models including a first predictive model configured to predict efficiency of a semiconductor device and a second predictive model configured to predict breakdown voltage of the semiconductor device;)). receiving a set of input parameters for designing the semiconductor device; (Kim, ⁋78, “Referring to FIG. 14, a decision model 400 may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device, and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments, a comparison module 430, and the like. In an example embodiment, the input module 410 may set the process variable 411 and the design variable 412 to arbitrary initial values.” [receiving a set of input parameters for designing the semiconductor device;]). and generating a design model for the semiconductor device such that the efficiency and the breakdown voltage achieve respective threshold conditions, (Kim, ⁋78, “Referring to FIG. 14, a decision model 400 [and generating a design model for the semiconductor device] may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. 
The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device, and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments, a comparison module 430, and the like. In an example embodiment, the input module 410 may set the process variable 411 and the design variable 412 to arbitrary initial values.” and Kim, ⁋80, “As a result of the comparison, if a difference between the predictive data and the target data 401 is greater than or equal to a predetermined threshold difference, the input module 410 may adjust values of the process variable 411 and/or the design variable 412 [such that the efficiency and the breakdown voltage]. As a result of the comparison, if the difference between the predictive data and the target data 401 is less than a predetermined threshold difference [achieve respective threshold conditions,], the input module 410 may output the values for the adjusted process variable 402 and the adjusted design variable 403.” And Kim, ⁋65, “For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”). the design model including a set of design parameters for the semiconductor device such that the first performance metric and the second performance metric achieve respective threshold conditions, (Kim, ⁋80, “As a result of the comparison, if a difference between the predictive data and the target data 401 is greater than or equal to a predetermined threshold difference, the input module 410 may adjust values of the process variable 411 and/or the design variable 412 [the design model including a set of design parameters for the semiconductor device such that the first system performance metric and the second system performance metric]. As a result of the comparison, if the difference between the predictive data and the target data 401 is less than a predetermined threshold difference [achieve respective threshold conditions,], the input module 410 may output the values for the adjusted process variable 402 and the adjusted design variable 403.”). wherein execution of the process-level simulation program generates at least a portion of training data for the first neural network, (Kim, ⁋41, “When the machine learning model performs training using a sample data set obtained by simulations using the simulation tool [wherein execution of the process-level simulation program generates at least a portion of training data for the first neural network,]”). …simulation program and the process- level simulation program are not executed to generate the design model. (Kim, ⁋78, “Referring to FIG. 14, a decision model 400 [the design model.] may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. 
The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device, and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments; the machine learning model being used after it has finished training is interpreted as the simulation programs not being executed as the training of the model has finished prior to generating the design model (i.e. …simulation program and the process-level simulation program are not executed to generate the design model.)”). While Kim teaches the use of multiple predictive models for predicting multiple performance metrics, Kim does not explicitly teach: A non-transitory computer-readable medium storing executable instructions that when executed by at least one processor is configured to cause the at least one processor to execute operations, the operations comprising: …associated with a circuit-level simulation program,… Wang teaches: A non-transitory computer-readable medium storing executable instructions that when executed by at least one processor is configured to cause the at least one processor to execute operations, the operations comprising: (Wang, ⁋18, “ a system for generating a model of a transistor includes: a processor; and memory storing instructions that, when executed by the processor, cause the processor to: [A non-transitory computer-readable medium storing executable instructions that when executed by at least one processor is configured to cause the at least one processor to execute operations, the operations comprising:]”). …associated with a circuit-level simulation program,… (Wang, ⁋66, “the computing system runs a simulation (e.g., a SPICE simulation) of a circuit […associated with a circuit-level simulation program,…], where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit.”). Kim and Wang are both in the same field of endeavor (i.e. semiconductor simulation). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim and Wang to teach the above limitation(s). The motivation for doing so is that SPICE simulations allow the user to simulate the behavior of an element of a circuit (cf. Wang, ⁋66, “runs a simulation (e.g., a SPICE simulation) of a circuit, where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit”). Regarding claim 12, Kim in view of Wang teaches the non-transitory computer-readable medium of claim 11. Kim further teaches wherein the first neural network and the second neural network are each implemented as deep neural networks with at least two hidden layer. (Kim, 27, “Embodiments of the inventive concept are described herein in the context of a semiconductor fabrication process simulation system that includes one or more machine learning engines. It will be understood that embodiments of the inventive concept are not limited to particular implementations of the semiconductor fabrication process simulation system and various types of Artificial Intelligence (AI) systems may be used including, but not limited to, a multi-layer neural network, a deep learning system; it is known in the art that deep neural networks is interpreted as having at least 2 hidden layers (i.e. 
wherein the first neural network and the second neural network are each implemented as deep neural networks with at least two hidden layer.)”). Regarding claim 17, Kim discloses: A method for semiconductor design system, the method comprising: generating a plurality of predictive models by configuring a first neural network and a second neural network, (Kim, ⁋61, “Next, in the example embodiment illustrated in FIG. 10, the first machine learning model and the second machine learning model may sequentially perform training [A method for semiconductor design system, the method comprising: generating a plurality of predictive models by configuring a first neural network and a second neural network,]. Referring to FIG. 10, first, a sample data set for training may be collected (S40).” and Kim ⁋27, “Embodiments of the inventive concept are described herein in the context of a semiconductor fabrication process simulation system that includes one or more machine learning engines. It will be understood that embodiments of the inventive concept are not limited to particular implementations of the semiconductor fabrication process simulation system and various types of Artificial Intelligence (AI) systems may be used including, but not limited to, a multi-layer neural network [neural network]”). including: configuring the first neural network to predict circuit parameters,…based on first simulation data generated by a process-level simulation program, configured to simulate a fabrication process; (Kim, ⁋62, “The input data input to the first machine learning model may be process parameters input to a simulation tool [based on first simulation data generated by a process-level simulation program, configured to simulate a fabrication process;] in a process of collecting a sample data set. The first machine learning model receiving process parameters may output predictive data representing a result of the process. As described above with reference to FIG. 9, the predictive data may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like.” [including: configuring the first neural network to predict circuit parameters,]). and configuring the second neural network to predict system parameters based on second simulation data, the second simulation data including the circuit parameters predicted by the first neural network, (Kim, ⁋64-65, “When the training of the first machine learning model is completed, an output of the first machine learning model for which training has been completed may be input to a second machine learning model (S44). In an example embodiment, the output of the first machine learning model may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like, and accordingly, a physical structure of a semiconductor device may be expressed. The second machine learning model may receive an output of the first machine learning model representing the physical structure of the semiconductor device; the output of the first model is interpreted as simulation data as its input was simulation data (i.e. based on second simulation data, the second simulation data including the circuit parameters predicted by the first neural network,) and output predictive data representing electrical characteristics of the semiconductor device. 
[0065] For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”; the threshold voltage, on/off current, etc. electrical characteristics are interpreted as performance metrics or system parameters (i.e. and configuring the second neural network to predict system parameters based on second simulation data,)). the first simulation data and second simulation data being associated with different physical domains, (Kim, ⁋62, “The input data input to the first machine learning model may be process parameters input to a simulation tool [the first simulation data] in a process of collecting a sample data set. The first machine learning model receiving process parameters may output predictive data representing a result of the process.” And Kim, ⁋64, “In an example embodiment, the output of the first machine learning model may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like, and accordingly, a physical structure of a semiconductor device may be expressed [and second simulation data]. The second machine learning model may receive an output of the first machine learning model representing the physical structure of the semiconductor device; the first data being process data and the second data being circuit data is interpreted as the two data being in different physical domains (i.e. being associated with different physical domains,)). the plurality of predictive models including a first predictive model configured to predict a first performance metric of a semiconductor device and a second predictive model configured to predict a second performance metric of the semiconductor device; (Kim, ⁋65, “For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”; the second machine learning model is interpreted as the first and second predictive models as the second machine learning model predicts multiple performance metrics (i.e. the plurality of predictive models including a first predictive model and a second predictive model, the first predictive model configured to predict a first performance metric of a semiconductor device, the second predictive model configured to predict a second performance metric of the semiconductor device;)). receiving a set of input parameters for designing the semiconductor device; (Kim, ⁋78, “Referring to FIG. 14, a decision model 400 may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device, and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments, a comparison module 430, and the like. In an example embodiment, the input module 410 may set the process variable 411 and the design variable 412 to arbitrary initial values.” [receiving a set of input parameters for designing the semiconductor device;]). 
and generating a set of design parameters for a design model of the semiconductor device such that the first performance metric and the second performance metric achieve respective threshold conditions, (Kim, ⁋78, “Referring to FIG. 14, a decision model 400 [a design model of the semiconductor device] may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device [and generating a set of design parameters for], and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments, a comparison module 430, and the like. In an example embodiment, the input module 410 may set the process variable 411 and the design variable 412 to arbitrary initial values.” and Kim, ⁋80, “As a result of the comparison, if a difference between the predictive data and the target data 401 is greater than or equal to a predetermined threshold difference, the input module 410 may adjust values of the process variable 411 and/or the design variable 412 [such that the first performance metric and the second performance metric]. As a result of the comparison, if the difference between the predictive data and the target data 401 is less than a predetermined threshold difference [achieve respective threshold conditions,], the input module 410 may output the values for the adjusted process variable 402 and the adjusted design variable 403.” And Kim, ⁋65, “For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”). wherein execution of the process-level simulation program generates at least a portion of training data for the first neural network, (Kim, ⁋41, “When the machine learning model performs training using a sample data set obtained by simulations using the simulation tool [wherein execution of the process-level simulation program generates at least a portion of training data for the first neural network,]”). …simulation program and the process-level simulation program are not executed to generate the design model. (Kim, ⁋78, “Referring to FIG. 14, a decision model 400 [the design model.] may be applied to a method of manufacturing a semiconductor device according to an example embodiment of the present inventive concept. The decision model 400 is a module that is configured to adjust values of a process variable 411 and a design variable 412 used for manufacturing a semiconductor device, and may include an input module 410, a machine learning model 420 for which training has been completed according to the above-described example embodiments; the machine learning model being used after it has finished training is interpreted as the simulation programs not being executed as the training of the model has finished prior to generating the design model (i.e. …simulation program and the process- level simulation program are not executed to generate the design model.)”). 
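
The decision-model passages mapped above (Kim ⁋⁋78-80) describe an iterative loop: start from arbitrary process/design variables, predict performance with the trained models, compare the prediction against target data, and keep adjusting the variables until the difference falls below a threshold. The sketch below illustrates that style of loop; the predictor function, targets, and random-perturbation update rule are hypothetical placeholders rather than Kim's or the applicant's actual procedure.

```python
# Illustrative predict/compare/adjust loop in the style of the decision model
# described in the cited passages. The predictor and update rule are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def predict_metrics(variables: np.ndarray) -> np.ndarray:
    """Stand-in for the trained predictive models (e.g., the cascaded networks)."""
    return np.array([variables.sum(), variables[0] - variables[1]])

target_metrics = np.array([1.5, 0.2])   # hypothetical target data
threshold = 1e-2                        # stop once every metric is this close
variables = rng.uniform(size=3)         # arbitrary initial process/design variables

for step in range(10_000):
    error = np.abs(predict_metrics(variables) - target_metrics)
    if np.all(error < threshold):       # respective threshold conditions satisfied
        break
    # Propose an adjustment and keep it only if it reduces the total error.
    candidate = variables + rng.normal(scale=0.05, size=variables.shape)
    if np.abs(predict_metrics(candidate) - target_metrics).sum() < error.sum():
        variables = candidate

print(f"stopped at step {step}, variables = {np.round(variables, 3)}")
```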
While Kim teaches the use of multiple predictive models for predicting multiple performance metrics, Kim does not explicitly teach: …associated with a circuit-level simulation program,… Wang teaches: …associated with a circuit-level simulation program,… (Wang, ⁋66, “the computing system runs a simulation (e.g., a SPICE simulation) of a circuit […associated with a circuit-level simulation program,…], where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit.”). Kim and Wang are both in the same field of endeavor (i.e. semiconductor simulation). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim and Wang to teach the above limitation(s). The motivation for doing so is that SPICE simulations allow the user to simulate the behavior of an element of a circuit (cf. Wang, ⁋66, “runs a simulation (e.g., a SPICE simulation) of a circuit, where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit”). Regarding claim 26, Kim in view of Wang teaches the semiconductor design system of claim 1. Kim further teaches: wherein generating the design model includes selecting a subset of the plurality of predictive models based on inclusion of the first performance metric and the second performance metric in the set of input parameters, (Kim, ⁋64-65, “The second machine learning model may receive an output of the first machine learning model representing the physical structure of the semiconductor device and output predictive data representing electrical characteristics of the semiconductor device. [0065] For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like.”; the threshold voltage, on/off current, etc. electrical characteristics are interpreted as performance metrics or system parameters and the determination on which characteristics are chosen is interpreted as selecting a subset of models as the second model is interpreted as the plurality of models (i.e. wherein generating the design model includes selecting a subset of the plurality of predictive models based on inclusion of the first performance metric and the second performance metric in the set of input parameters,)). the subset being selected such that each predictive model in the subset corresponds to a parameter in the set of input parameters. (Kim, ⁋79, “When at least one of the process variable 411 and the design variable 412 set to an arbitrary initial value is input to the machine learning model 420, the machine learning model 420 may output predictive data for the electrical characteristics of the semiconductor device. The comparison module 430 may compare the predictive data and target data 401 and may transmit a comparison result to the input module 410. For example, the target data 401 may include device data corresponding to electrical characteristics of a semiconductor device included in the semiconductor device and process data corresponding to a process of manufacturing the semiconductor device. 
In an example embodiment, the device data may include a threshold voltage, an off current of a semiconductor device, an on current of a semiconductor device, and the like; the target data including the performance metrics is interpreted as a model, in the selected subset, corresponding to the set of input parameters as the target data is compared to the input/output result (i.e. the subset being selected such that each predictive model in the subset corresponds to a parameter in the set of input parameters.), and the process data may include a thickness of a gate insulating film, a channel length, a junction depth of the semiconductor device, and the like.”). Regarding claims 27 and 29, the claims are similar to claim 3 and rejected under the same rationale. Regarding claim 30, Kim in view of Wang teaches the method of claim 17. Kim further teaches wherein the first neural network and the second neural network are arranged in a cascaded configuration such that output parameters of the first neural network are provided as input to the second neural network. (Kim, ⁋64, “When the training of the first machine learning model is completed, an output of the first machine learning model for which training has been completed may be input to a second machine learning model (S44). In an example embodiment, the output of the first machine learning model may include a junction depth determined by a process, a thickness of a gate dielectric film, an effective channel length, a gate length, and the like, and accordingly, a physical structure of a semiconductor device may be expressed. The second machine learning model may receive an output of the first machine learning model representing the physical structure of the semiconductor device and output predictive data representing electrical characteristics of the semiconductor device [wherein the first neural network and the second neural network are arranged in a cascaded configuration such that output parameters of the first neural network are provided as input to the second neural network.].”). Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of Aeloiza, et al., US Pre-Grant Publication 2019/0033362A1 (“Aeloiza”). Regarding claim 2, Kim in view of Wang teaches the semiconductor design system of claim 1. Kim further teaches wherein the first performance metric includes efficiency, (Kim, ⁋65, “For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic; current-voltage characteristic is interpreted as an efficiency (i.e. wherein the first performance metric includes efficiency,), an on current, an off current of a semiconductor device, and the like.”). While Kim in view of Wang teaches the use of multiple predictive models for predicting multiple performance metrics using simulated circuit inputs, the combination does not explicitly teach and the second performance metric includes on-resistance. Aeloiza teaches and the second performance metric includes on-resistance. (Aeloiza, ⁋1, “Device failure can be predicted by monitoring physical or electrical characteristics of a semiconductor, such as devices' ON-resistance [and the second performance metric includes on-resistance.]”). Kim, in view of Wang, and Aeloiza are both in the same field of endeavor (i.e. semiconductor performance). 
It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, and Aeloiza to teach the above limitation(s). The motivation for doing so is that on-resistance is an important performance metric as it is an indicator of semiconductor failure (cf. Aeloiza, ⁋1, “Device failure can be predicted by monitoring physical or electrical characteristics of a semiconductor, such as devices' ON-resistance”). Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of Maseeh, et al., US Patent Publication 6116766A (“Maseeh”) and Simpson, Non-Patent Literature “PRIDE: An Integrated Design Environment for Semiconductor Device Simulation” (“Simpson”). Regarding claim 4, Kim in view of Wang teaches the semiconductor design system of claim 1. While Kim in view of Wang teaches the use of multiple predictive models for predicting multiple performance metrics using simulated circuit inputs, the combination does not explicitly teach wherein the design model includes a first visual object that graphically represents the fabrication process for creating the semiconductor device, a second visual object that graphically illustrates parameters for packaging the semiconductor device, and a third visual object that graphically illustrates doping impurities of the semiconductor device. Maseeh teaches wherein the design model includes a first visual object that graphically represents the fabrication process for creating the semiconductor device, a second visual object that graphically illustrates parameters for packaging the semiconductor device, and a third visual object (Maseeh, col. 5 lines 49-58, “Because the present invention models the device at the completion of any of the steps in the fabrication process; modeling the device at any of the steps in the fabrication process is interpreted as having multiple visual objects (i.e. a second visual object…and a third visual object) as well as the device at the completion of the fabrication process [wherein the design model includes a first visual object that graphically represents the fabrication process for creating the semiconductor device], the device may also be visualized upon the completion of any of the individual steps in the Process Table; one of ordinary skill in the art knows that in semiconductor fabrication, packaging is one of the core steps thus is included in the process table (i.e. that graphically illustrates parameters for packaging the semiconductor device,). Visualizing the device after the completion of any of the individual processes yields a true representation of what the geometry and behavior of the actual device will be upon the completion of any of the actual fabrication processes that are performed on it.”). Kim, in view of Wang, and Maseeh are both in the same field of endeavor (i.e. semiconductor fabrication). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, and Maseeh to teach the above limitation(s). The motivation for doing so is the visual representation of each fabrication process improves designer visibility into the whole fabrication process (cf. Maseeh, col. 
5 lines 54-58, “Visualizing the device after the completion of any of the individual processes yields a true representation of what the geometry and behavior of the actual device will be upon the completion of any of the actual fabrication processes that are performed on it.”). While Kim in view of Wang and Maseeh teaches the creation of multiple visualizations multiple fabrication steps, the combination does not explicitly teach a visual object that graphically illustrates doping impurities of the semiconductor device. Simpson teaches a visual object that graphically illustrates doping impurities of the semiconductor device. (Simpson, pg. 1166 see Figure 4, “Specifying, displaying, and checking doping [that graphically illustrates doping impurities of the semiconductor device.] within picasso. A “property panel” is used to specify parameters to an analytical doping distribution. Fill patterns identify different material types and doping polarity. A cross section plot allows the doping profile through any region of the device to be examined. Here, the profile through the p+-emitter of a CMOS trench isolation structure is shown.”). Kim, in view of Wang and Maseeh, and Simpson are both in the same field of endeavor (i.e. semiconductor fabrication). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang and Maseeh, and Simpson to teach the above limitation(s). The motivation for doing so is that separating the simulations into multiple parts lowers the processing cost for interactive graphics (cf. Simpson, pg. col. 1, “By separating the task of device simulation into its component parts, an open, modular architecture for the incorporation of different simulation programs is attained. Moreover, PRIDE exploits the unique characteristics of engineering workstations (fast interactive graphics and low cost compute power), to provide a consistent, user-friendly interface to the device designer.”). Claims 6, 8, 13, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of and Griffith, et al., US Pre-Grant Publication 2019/0050445A1 (“Griffith”). Regarding claim 6, Kim in view of Wang teaches the semiconductor design system of claim 1. Kim further teaches: wherein the operations comprise: receiving source data from a plurality of data sources; (Kim, ⁋54, “First, referring to FIG. 9, operations of a method of training a machine learning model according to an example embodiment of the present inventive concept may begin by collecting a sample data set for training (S30). The sample data set for training can be collected by using a simulation tool based on a physical model. For example, process parameters for controlling a semiconductor process and design parameters representing a structure of a semiconductor device; the process and design parameters are interpreted as source data (i.e. wherein the operations comprise: receiving source data from a plurality of data sources,) to be formed by a semiconductor process may be input into a simulation tool, and reference data representing characteristics of the semiconductor device may be obtained. Process parameters and reference data corresponding thereto, and design parameters and reference data corresponding thereto may be included in the sample data set.”). 
and partitioning the dataset of filtered data to generate the training data and test data, (Kim, ⁋54, “Process parameters and reference data corresponding thereto, and design parameters and reference data corresponding thereto may be included in the sample data set; the sample data set having reference data and design/process parameters is interpreted as the dataset having both training and test data.” and Kim, ⁋58-59, “Input data may be selected from the sample data set; selecting the input data from the sample data set is interpreted as dividing the dataset as the training data is being taken from the dataset (i.e. and partitioning the dataset of filtered data to generate the training data) and input to a second machine learning model (S34). For example, input data input to the second machine learning model may be design parameters. For example, the second machine learning model receiving design parameters may output predictive data representing electrical characteristics of semiconductor devices having structures according to the design parameters. [0059] The predictive data output from the second machine learning model may be compared with reference data of the sample data set (S34); previously selecting the input/training data from the sample data set is interpreted as dividing the dataset as the dataset contains the test data (i.e. and partitioning the dataset of filtered data to generate…test data).”). wherein the training data is used to configure at least one of the first neural network or the second neural network, (Kim, ⁋57-58, “Training the first machine learning model may be performed using a plurality of sample data sets [wherein the training data is used to configure at least one of the first neural network] to increase the reliability of the first machine learning model and reduce or prevent errors due to bias due to a specific condition. [0058] Input data may be selected from the sample data set and input to a second machine learning model (S34).” [wherein the training data is used to configure at least one of …or the second neural network,]). and the test data is used to evaluate an accuracy of at least a portion of the first neural network or the second neural network. (Kim, ⁋63, “The reference data is data included in the sample data set, and may be data obtained by inputting process parameters into a simulation tool. For example, among the data obtained by inputting process parameters into the simulation tool, the junction depth, the thickness of the gate dielectric film, the effective channel length, the gate length, and the like, as examples above, may be selected as reference data, and the predictive data output by the first machine learning model may be compared with the reference data [and the test data is used to evaluate an accuracy of at least a portion of the first neural network]. The first machine learning model may perform training based on the comparison result (S43).” and Kim, ⁋65, “For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like. The predictive data output from the second machine learning model may be compared with reference data [and the test data is used to evaluate an accuracy of at least a portion of…or the second neural network.] of the sample data set (S45), and the second machine learning model may perform training according to the result thereof (S46).”). 
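
The claim 6 mapping above turns on collecting source data, filtering it, and partitioning the result into training data (used to configure the networks) and test data (used to evaluate accuracy). A conventional way to express that split is shown below with scikit-learn's train_test_split; the data, model, and split ratio are hypothetical placeholders, and only the partition-then-evaluate pattern is the point.

```python
# Conventional partitioning of a simulation-derived sample data set into training
# and test data, with the held-out portion used to evaluate model accuracy.
# The data, model, and split ratio here are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 4))   # e.g., process/design parameters (after filtering)
y = rng.uniform(size=(300, 2))   # e.g., simulated device characteristics

# Partition the filtered dataset into training data and test data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The training data configures the network ...
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X_train, y_train)

# ... and the test data evaluates its accuracy on samples it has not seen.
test_mse = mean_squared_error(y_test, model.predict(X_test))
print(f"held-out MSE: {test_mse:.4f}")
```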
While Kim in view of Wang teaches dividing a dataset from multiple sources into training and test data, the combination does not explicitly teach filtering the source data to obtain a dataset of filtered data; Griffith teaches filtering the source data to obtain a dataset of filtered data; (Griffith, ⁋98, “Match filter 658 may include any number of filter types 658 a, 658 b, and 658 n, each of which may be configured to receive a stream of data representing a column 656 of data”; the stream of data is interpreted as the source data (i.e. filtering the source data to obtain a dataset of filtered data;)). Kim, in view of Wang, and Griffith are both in the same field of endeavor (i.e. data collection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, and Griffith to teach the above limitation(s). The motivation for doing so is removing anomalous data points increases the reliability of the dataset (cf. Griffith, ⁋7-8, “Thus, conventional approaches are less effective in data “wrangling” (i.e., cleaning and integrating ‘messy’ and ‘sophisticated’ data arrangements), which, in turn causes formation of unreliable data sets. Unfortunately, the relative unreliability of conventional techniques to remove defects in data thereby reduces others' confidence in using such data, which frustrates or impedes the repurposing or sharing of a dataset generated by the aforementioned techniques. Thus, what is needed is a solution for facilitating techniques to optimize linking of datasets, without the limitations of conventional techniques.”). Regarding claim 8, Kim in view of Wang and Griffith teaches the semiconductor design system of claim 6. Kim further teaches wherein the plurality of data sources include a first data source storing first source data and a second data source storing second source data, (Kim, ⁋54, “First, referring to FIG. 9, operations of a method of training a machine learning model according to an example embodiment of the present inventive concept may begin by collecting a sample data set for training (S30). The sample data set for training can be collected by using a simulation tool based on a physical model. For example, process parameters for controlling a semiconductor process and design parameters representing a structure of a semiconductor device; the process and design parameters are interpreted as a first and second data source (i.e. wherein the plurality of data sources include a first data source storing first source data and a second data source storing second source data,) to be formed by a semiconductor process may be input into a simulation tool, and reference data representing characteristics of the semiconductor device may be obtained. Process parameters and reference data corresponding thereto, and design parameters and reference data corresponding thereto may be included in the sample data set.”). Griffith further teaches: wherein the operations further comprise: selecting a first set of logic rules for the first source data from a domain knowledge database based on the first set of logic rules being associated with the first data source; (Griffith, ⁋98, “In at least some examples, filter types 658 a, 658 b, and 658 n are implemented as probabilistic filters (e.g., Bloom filters) each configured to determine whether a subset of data is either “likely” or “definitely not” in a set of data. Likely subsets of data may be included in data files 690. In some examples, a stream of data representing a column 656 may be processed to compress subsets of data (e.g., via hashing) to apply to each of filter types 658 a, 658 b, and 658 n.”; the probabilistic filters is interpreted as a set of logic rules for a first source data, or the column data (i.e. wherein the operations further comprise: selecting a first set of logic rules for the first source data from a domain knowledge database based on the first set of logic rules being associated with the first data source;)). applying the first set of logic rules to the first source data, including removing one or more missing values within a row or column of the first source data (Griffith, ⁋98, “In some examples, inference engine 632 may be configured to infer a correction for typographical error. For example, if a state abbreviation for Alaska is “AK,” and an instance of “KA” is detected in column 656, inference engine 632 may predict a transposition error and corrective action to resolve the anomaly [applying the first set of logic rules to the first source data,]. Dataset analyzer 630 may be configured to generate a notification to present in a user interface that may alert a user that less than 100% of the data matches the category “state abbreviations,” and may further present the predicted remediation action, such as replacing “KA” with “AK,” should the user so select. Or, such remedial action may be implemented automatically if a confidence level is sufficient enough (e.g., 99.8%) that the replacement of “KA” with “AK” resolves the anomalous condition.”; replacing a value from the column is interpreted as removing a missing value from a column as the value is incorrect (i.e. including removing one or more missing values within a row or column of the first source data)). and removing one or more values that are not varying within a row or column of the tabular data. (Griffith, ⁋159, “In another example, data representing another property can define an anomaly as “a duplicated row of data” in dataset 2305 a. In this case, the value of the data attribute is extracted from dataset 2305 a and matched against other fields or cells in rows of 2305 a… While not shown, data remediation interface 2302 may present a user input selection with which interface 2302 may invoke an action to modify dataset 2305 a to remediate the condition, such as deleting the duplicate row of data [and removing one or more values that are not varying within a row or column of the first source data;].”). selecting a second set of logic rules for the second source data from the domain knowledge database based on the second set of logic rules being associated with the second data source; and applying the second set of logic rules to the second source data. (Griffith, ⁋96, “Further, inference engine 632 may be configured to further include a subset characterizer 657 and a match filter 658, either or both of which may be implemented. According to various examples, subset characterizer 657 and match filter 658 each may be configured to classify units of data in, for example, a column 656 to determine one or more of a datatype; determining one or more datatypes is interpreted as the same filtering steps applied to the first source data is applied to a second source data (i.e. selecting a second set of logic rules for the second source data from the domain knowledge database based on the second set of logic rules being associated with the second data source; and applying the second set of logic rules to the second source data.), a categorical variable, or any dataset attribute associated with column 656. In one or more implementations, elements depicted in diagram 600 of FIG. 6 may include structures and/or functions as similarly-named or similarly-numbered elements depicted in other drawings.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Griffith with the teachings of Kim and Wang for the same reasons disclosed in claim 6. Regarding claim 13, Kim in view of Wang teaches the non-transitory computer-readable medium of claim 11. Kim further teaches: wherein the operations further comprise: receiving source data from a plurality of data sources; (Kim, ⁋54, “First, referring to FIG. 9, operations of a method of training a machine learning model according to an example embodiment of the present inventive concept may begin by collecting a sample data set for training (S30). The sample data set for training can be collected by using a simulation tool based on a physical model. For example, process parameters for controlling a semiconductor process and design parameters representing a structure of a semiconductor device; the process and design parameters are interpreted as a first and second data source (i.e. wherein the operations further comprise: receiving source data from a plurality of data sources;) to be formed by a semiconductor process may be input into a simulation tool, and reference data representing characteristics of the semiconductor device may be obtained. Process parameters and reference data corresponding thereto, and design parameters and reference data corresponding thereto may be included in the sample data set.”). partitioning the dataset of filtered data into the training data and the test data, (Kim, ⁋54, “Process parameters and reference data corresponding thereto, and design parameters and reference data corresponding thereto may be included in the sample data set; the sample data set having reference data and design/process parameters is interpreted as the dataset having both training and test data.” and Kim, ⁋58-59, “Input data may be selected from the sample data set; selecting the input data from the sample data set is interpreted as dividing the dataset as the training data is being taken from the dataset (i.e. partitioning the dataset of filtered data into the training data) and input to a second machine learning model (S34). For example, input data input to the second machine learning model may be design parameters. For example, the second machine learning model receiving design parameters may output predictive data representing electrical characteristics of semiconductor devices having structures according to the design parameters. [0059] The predictive data output from the second machine learning model may be compared with reference data of the sample data set (S34); previously selecting the input/training data from the sample data set is interpreted as dividing the dataset as the dataset contains the test data (i.e. partitioning the dataset of filtered data into…test data).”).
wherein the training data is used to configure at least one of the first neural network or the second neural network (Kim, ⁋57-58, “Training the first machine learning model may be performed using a plurality of sample data sets [wherein the training data is used to configure at least one of the first neural network] to increase the reliability of the first machine learning model and reduce or prevent errors due to bias due to a specific condition. [0058] Input data may be selected from the sample data set and input to a second machine learning model (S34).” [wherein the training data is used to configure at least one of…or the second neural network]). and the test data is used to evaluate at least a portion of the first neural network or the second neural network. (Kim, ⁋63, “The reference data is data included in the sample data set, and may be data obtained by inputting process parameters into a simulation tool. For example, among the data obtained by inputting process parameters into the simulation tool, the junction depth, the thickness of the gate dielectric film, the effective channel length, the gate length, and the like, as examples above, may be selected as reference data, and the predictive data output by the first machine learning model may be compared with the reference data [and the test data is used to evaluate at least a portion of the first neural network]. The first machine learning model may perform training based on the comparison result (S43).”). (Kim, ⁋65, “[0065] For example, the predictive data output from the second machine learning model may include a threshold voltage, a current-voltage characteristic, an on current, an off current of a semiconductor device, and the like. The predictive data output from the second machine learning model may be compared with reference data [and the test data is used to evaluate at least a portion of…or the second neural network.] of the sample data set (S45), and the second machine learning model may perform training according to the result thereof (S46).”). While Kim in view of Wang teaches a plurality of data from multiple sources to create training and test data, the combination does not explicitly teach generating a dataset of filtered data by filtering the source data using a set of logic rules selected from a domain knowledge database; Griffith teaches generating a dataset of filtered data by filtering the source data using a set of logic rules selected from a domain knowledge database; (Griffith, ⁋98, “Match filter 658 may include any number of filter types 658 a, 658 b, and 658 n, each of which may be configured to receive a stream of data representing a column 656 of data; the stream of data is interpreted as the source data (i.e. generating a dataset of filtered data by). A filter type, such as filter types 658 a, 658 b, and 658 n, may be configured to compute one of two states indicative of whether there is a match to identify a categorical variable. In at least some examples, filter types 658 a, 658 b, and 658 n are implemented as probabilistic filters (e.g., Bloom filters) each configured to determine whether a subset of data is either “likely” or “definitely not” in a set of data. Likely subsets of data may be included in data files 690. 
In some examples, a stream of data representing a column 656 may be processed to compress subsets of data (e.g., via hashing) to apply to each of filter types 658 a, 658 b, and 658 n.”; the probabilistic filters is interpreted as a set of logic rules for a first source data, or the column data (i.e. filtering the source data using a set of logic rules selected from a domain knowledge database;)). Kim, in view of Wang, and Griffith are both in the same field of endeavor (i.e. data collection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, and Griffith to teach the above limitation(s). The motivation for doing so is removing anomalous data points increases the reliability of the dataset (cf. Griffith, ⁋7-8, “Thus, conventional approaches are less effective in data “wrangling” (i.e., cleaning and integrating ‘messy’ and ‘sophisticated’ data arrangements), which, in turn causes formation of unreliable data sets. Unfortunately, the relative unreliability of conventional techniques to remove defects in data thereby reduces others' confidence in using such data, which frustrates or impedes the repurposing or sharing of a dataset generated by the aforementioned techniques. Thus, what is needed is a solution for facilitating techniques to optimize linking of datasets, without the limitations of conventional techniques.”). Regarding claim 25, Kim in view of Wang teaches the semiconductor system of claim 1. Kim further teaches wherein generating the plurality of predictive models includes…the training data as seen in claim 1. While Kim in view of Wang teaches dividing a dataset from multiple sources into training and test data, the combination does not explicitly teach filtering the training data using respective domain-specific logic rules selected based on identification of a data source that stores the training data, the filtering includes removing parameters that are non-varying or have one or more missing values. Griffith teaches: filtering the training data using respective domain-specific logic rules selected based on identification of a data source that stores the training data, (Griffith, ⁋98, “In at least some examples, filter types 658 a, 658 b, and 658 n are implemented as probabilistic filters (e.g., Bloom filters) each configured to determine whether a subset of data is either “likely” or “definitely not” in a set of data. Likely subsets of data may be included in data files 690. In some examples, a stream of data representing a column 656 may be processed to compress subsets of data (e.g., via hashing) to apply to each of filter types 658 a, 658 b, and 658 n.”; the probabilistic filters is interpreted as a set of logic rules for a first source data, or the column data (i.e. filtering the training data using respective domain-specific logic rules selected based on identification of a data source that stores the training data,)). the filtering includes removing parameters that are non-varying or have one or more missing values. (Griffith, ⁋159, “In another example, data representing another property can define an anomaly as “a duplicated row of data” in dataset 2305 a. 
In this case, the value of the data attribute is extracted from dataset 2305 a and matched against other fields or cells in rows of 2305 a… While not shown, data remediation interface 2302 may present a user input selection with which interface 2302 may invoke an action to modify dataset 2305 a to remediate the condition, such as deleting the duplicate row of data [the filtering includes removing parameters that are non-varying].”). Kim, in view of Wang, and Griffith are both in the same field of endeavor (i.e. data collection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, and Griffith to teach the above limitation(s). The motivation for doing so is removing anomalous data points increases the reliability of the dataset (cf. Griffith, ⁋7-8, “Thus, conventional approaches are less effective in data “wrangling” (i.e., cleaning and integrating ‘messy’ and ‘sophisticated’ data arrangements), which, in turn causes formation of unreliable data sets. Unfortunately, the relative unreliability of conventional techniques to remove defects in data thereby reduces others' confidence in using such data, which frustrates or impedes the repurposing or sharing of a dataset generated by the aforementioned techniques. Thus, what is needed is a solution for facilitating techniques to optimize linking of datasets, without the limitations of conventional techniques.”). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of Griffith, et al., US Pre-Grant Publication 2019/0050445A1 (“Griffith”) and Oltean, et al., Non-Patent Literature “Computational Intelligence and Wavelet Transform Based Metamodel for Efficient Generation of Not-Yet Simulated Waveforms” (“Oltean”). Regarding claim 7, Kim in view of Wang and Griffith teaches the semiconductor system of claim 6. Kim further teaches wherein the operations further comprise: evaluating the accuracy of at least a portion of the first neural network or the second neural network based on the test data, as seen in claim 6. While Kim in view of Wang and Griffith teaches evaluating the accuracy of the first and second neural networks based on the filtered test data, the combination does not explicitly teach including generating at least one quality check graph that depicts predicted values for the first performance metric in view of ground truth values for the first performance metric. Oltean teaches including generating at least one quality check graph that depicts predicted values for the first performance metric in view of ground truth values for the first performance metric. (Oltean, see Figure 8 [image not reproduced]. Figure 8 shows the validation graph for a neural network model and the act of generating the validation graphs is interpreted as the testing engine. The red lines are interpreted as the predicted values from the test data and the green lines are interpreted as the ground truth values (i.e. including generating at least one quality check graph that depicts predicted values for the first performance metric in view of ground truth values for the first performance metric.)). Kim, in view of Wang and Griffith, and Oltean are both in the same field of endeavor (i.e. machine learning).
It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang and Griffith, and Oltean to teach the above limitation(s). The motivation for doing so is to increase efficiency of training models (cf. Oltean, Introduction, “The resulted metamodels are very accurate and fast to evaluate, offering new possibilities to the designers to efficiently analyze the full design space, in the quest for optimal design solutions.”). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of Griffith, et al., US Pre-Grant Publication 2019/0050445A1 (“Griffith”), Saqlain, et al., Non-Patent Literature “A Deep Convolutional Neural Network for Wafer Defect Identification on an Imbalanced Dataset in Semiconductor Manufacturing Processes” (“Saqlain”) and Cesare, et al., US Pre-Grant Publication 2006/0195489A1 (“Cesare”). Regarding claim 14, Kim in view of Wang and Griffith teaches the non-transitory computer-readable medium of claim 13. Kim further teaches wherein the plurality of data sources include a first data source storing technology computer- aided design (TCAD) simulation data, (Kim, ⁋54, “First, referring to FIG. 9, operations of a method of training a machine learning model according to an example embodiment of the present inventive concept may begin by collecting a sample data set for training (S30). The sample data set for training can be collected by using a simulation tool based on a physical model.” And Kim, ⁋27, “For example, in order to accurately predict the electrical characteristics of semiconductor devices, a machine learning model may be trained in advance by using the electrical characteristics of semiconductor devices predicted using a simulation tool such as a Technology Computer Aided Design (TCAD) [wherein the plurality of data sources include a first data source storing technology computer- aided design (TCAD) simulation data,]”). Wang further teaches a second data source storing simulation program with integrated circuit emphasis (SPICE) variables, (Wang, ⁋66, “the computing system runs a simulation (e.g., a SPICE simulation) of a circuit [a second data source storing simulation program with integrated circuit emphasis (SPICE) variables,], where the simulation uses the ported neural network model to simulate the behavior of at least one element of the circuit.”). While Kim in view of Wang and Griffith teaches the plurality of data sources including a TCAD and SPICE data sources, the combination does not explicitly teach a third data source storing a power electronics laboratory test data, and a fourth source storing wafer-level measurement data, wherein a separate set of logic rules are applied to each of the TCAD simulation data, the SPICE variables, the power electronics laboratory test data, and the wafer-level measurement data. Saqlain teaches a third data source storing a power electronics laboratory test data, and a fourth source storing wafer-level measurement data, (Saqlain, Section 2, “The WM-811K dataset is a semiconductor dataset which consists of 811,457 real WM images [3]. The wafer images were collected from 46,293 lots in a circuit probe (CP) test of semiconductor fabrication process. A single lot contains 25 WMs, so there should be 1,157,325 WMs in total (i.e., 46,293 lots × 25 wafer/lot). 
Since not all lots have exact 25 WMs due to some sensor faults or other unknown reasons, they were pruned from the dataset. The dataset also contains additional information about each WM such as lot name, die size, wafer index number, failure types, and training/test labels. This is the largest publicly available WM dataset that can be accessed at the MIR laboratory [a third data source storing a power electronics laboratory test data,] website [14]. There are different sizes of wafer images because of their two-dimensional nature and having different pixel values along the length and width of the image [and a fourth source storing wafer-level measurement data,]. We found total of 632 various sizes of wafer images ranging from (6×21) to (300×202). Domain experts were responsible for defining nine different defect classes of WMs and assigning manual labels to 172,950 (21.3%) WMs in whole dataset.”). Kim, in view of Wang and Griffith, and Saqlain are both in the same field of endeavor (i.e. semiconductor manufacturing). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang and Griffith, and Saqlain to teach the above limitation(s). The motivation for doing so is that having more data improves the semiconductor fabrication process (cf. Saqlain, Section 1, “Accurate classification of WM patterns plays an important role in identification of wafer defects, which will enhance the semiconductor yield and quality by improving the wafer fabrication process.”). While Kim in view of Wang, Griffith, and Saqlain teaches the plurality of data sources, the combination does not explicitly teach wherein a separate set of logic rules are applied to each of the TCAD simulation data, the SPICE variables, the power electronics laboratory test data, and the wafer-level measurement data. Cesare teaches wherein a separate set of logic rules are applied to each of the TCAD simulation data, the SPICE variables, the power electronics laboratory test data, and the wafer-level measurement data. (Cesare, ⁋15, “At least one rule definition is processed to clean the input table. Each rule definition indicates a find criteria, a replacement value, and an input data column in the input table. The rule definition comprises a type of rule that is a member of the set of rules consisting of: find and replace, discretization, and numeric clip, and at least two rule definitions are comprised of different rule types; a rule definition for each column is interpreted as a different rule for different data sources (i.e. wherein a separate set of logic rules are applied to each of the TCAD simulation data, the SPICE variables, the power electronics laboratory test data, and the wafer-level measurement data.). For each rule definition, the input data column is searched for any fields that match the find criteria.”). Kim, in view of Wang, Griffith, and Saqlain, and Cesare are both in the same field of endeavor (i.e. data processing). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, Griffith, and Saqlain, and Cesare to teach the above limitation(s). The motivation for doing so is that different rules are required for different data when performing data cleansing (cf. Cesare, ⁋12, “Data transformations and cleansing is used when data is inconsistent or incompatible between sources. 
In such case, some level of data cleansing is needed to ensure data consistency and accuracy.”). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of Griffith, et al., US Pre-Grant Publication 2019/0050445A1 (“Griffith”) and Cesare, et al., US Pre-Grant Publication 2006/0195489A1 (“Cesare”). Regarding claim 15, Kim in view of Wang and Griffith teaches the non-transitory computer-readable medium of claim 13. Kim further teaches wherein the operations further comprise: identifying that the source data includes first source data from a first data source of the plurality of data sources and second source data from a second data source of the plurality of data sources; (Kim, ⁋54, “First, referring to FIG. 9, operations of a method of training a machine learning model according to an example embodiment of the present inventive concept may begin by collecting a sample data set for training (S30). The sample data set for training can be collected by using a simulation tool based on a physical model. For example, process parameters for controlling a semiconductor process and design parameters representing a structure of a semiconductor device; the process and design parameters are interpreted as the first data source and the second data source (i.e. identifying that the source data includes first source data from a first data source of the plurality of data sources and second source data from a second data source of the plurality of data sources;) to be formed by a semiconductor process may be input into a simulation tool, and reference data representing characteristics of the semiconductor device may be obtained. Process parameters and reference data corresponding thereto, and design parameters and reference data corresponding thereto may be included in the sample data set.”). Griffith further teaches: selecting a first set of logic rules from the domain knowledge database that corresponds to the first data source;…filtering the first source data by applying the first set of logic rules to the first source data; (Griffith, ⁋98, “In at least some examples, filter types 658 a, 658 b, and 658 n are implemented as probabilistic filters (e.g., Bloom filters) each configured to determine whether a subset of data is either “likely” or “definitely not” in a set of data. Likely subsets of data may be included in data files 690. In some examples, a stream of data representing a column 656 may be processed to compress subsets of data (e.g., via hashing) to apply to each of filter types 658 a, 658 b, and 658 n.”; the probabilistic filter is interpreted as selecting a set of logic rules and the column of data is interpreted as the source data (i.e. selecting a first set of logic rules from the domain knowledge database that corresponds to the first data source;…filtering the first source data by applying the first set of logic rules to the first source data;)). selecting a second set of logic rules from the domain knowledge database that corresponds to the second data source;…and filtering the second source data by applying the second set of logic rules to the second source data, (Griffith, ⁋96, “Further, inference engine 632 may be configured to further include a subset characterizer 657 and a match filter 658, either or both of which may be implemented. 
According to various examples, subset characterizer 657 and match filter 658 each may be configured to classify units of data in, for example, a column 656 to determine one or more of a datatype; determining one or more datatypes is interpreted as the same filtering steps applied to the first source data is applied to a second source data (i.e. selecting a second set of logic rules from the domain knowledge database that corresponds to the second data source;…and filtering the second source data by applying the second set of logic rules to the second source data,), a categorical variable, or any dataset attribute associated with column 656. In one or more implementations, elements depicted in diagram 600 of FIG. 6 may include structures and/or functions as similarly-named or similarly-numbered elements depicted in other drawings.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Griffith with the teachings of Kim and Wang for the same reasons disclosed in claim 13. While Kim in view of Wang and Griffith teaches the plurality of filtered data sources including first and second data sources having sets of logic rules, the combination does not explicitly teach the second set of logic rules having at least one rule that is different from the first set of logic rules. Cesare teaches the second set of logic rules having at least one rule that is different from the first set of logic rules (Cesare, ⁋15, “At least one rule definition is processed to clean the input table. Each rule definition indicates a find criteria, a replacement value, and an input data column in the input table. The rule definition comprises a type of rule that is a member of the set of rules consisting of: find and replace, discretization, and numeric clip, and at least two rule definitions are comprised of different rule types; a rule definition for each column is interpreted as a different rule for different data sources (i.e. the second set of logic rules having at least one rule that is different from the first set of logic rules). For each rule definition, the input data column is searched for any fields that match the find criteria.”). Kim, in view of Wang and Griffith, and Cesare are both in the same field of endeavor (i.e. data processing). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang and Griffith, and Cesare to teach the above limitation(s). The motivation for doing so is that different rules are required for different data when performing data cleansing (cf. Cesare, ⁋12, “Data transformations and cleansing is used when data is inconsistent or incompatible between sources. In such case, some level of data cleansing is needed to ensure data consistency and accuracy.”). Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of Maseeh, et al., US Patent Publication 6116766A (“Maseeh”). Regarding claim 24, Kim in view of Wang teaches the semiconductor design system of claim 1.
While Kim in view of Wang teaches the use of multiple predictive models for predicting multiple performance metrics using simulated circuit inputs, the combination does not explicitly teach wherein the design model includes one or more visual objects that graphically represent process-level, device-level, or circuit-level parameters of the semiconductor device based on the set of input parameters. Maseeh teaches wherein the design model includes one or more visual objects that graphically represent process-level, device-level, or circuit-level parameters of the semiconductor device based on the set of input parameters. (Maseeh, col. 5 lines 49-58, “Because the present invention models the device at the completion of any of the steps in the fabrication process as well as the device at the completion of the fabrication process, the device may also be visualized upon the completion of any of the individual steps in the Process Table. Visualizing the device after the completion of any of the individual processes yields a true representation of what the geometry and behavior of the actual device will be upon the completion of any of the actual fabrication processes that are performed on it [wherein the design model includes one or more visual objects that graphically represent process-level…parameters of the semiconductor device based on the set of input parameters.].”). Kim, in view of Wang, and Maseeh are both in the same field of endeavor (i.e. semiconductor fabrication). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, and Maseeh to teach the above limitation(s). The motivation for doing so is the visual representation of each fabrication process improves designer visibility into the whole fabrication process (cf. Maseeh, col. 5 lines 54-58, “Visualizing the device after the completion of any of the individual processes yields a true representation of what the geometry and behavior of the actual device will be upon the completion of any of the actual fabrication processes that are performed on it.”). Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Kim, et al., US Pre-Grant Publication 2022/0043405A1 (“Kim”) in view of Wang, et al., US Pre-Grant Publication 2020/0320366A1 (“Wang”) and further in view of Wang, et al., US Patent Publication 2019/0376850A1 (“Wang2”). Regarding claim 28, Kim in view of Wang teaches the non-transitory computer-readable medium of claim 11. While Kim in view of Wang teaches the use of multiple predictive models for predicting multiple performance metrics using simulated circuit inputs, the combination does not explicitly teach wherein the plurality of predictive models include at least one predictive model configured to predict a thermal impedance, an electromagnetic emission signature, or a package parasitic. Wang2 teaches wherein the plurality of predictive models include at least one predictive model configured to predict a thermal impedance, an electromagnetic emission signature, or a package parasitic. (Wang2, ⁋19, “One approach to determine the T.sub.j may be to estimate the T.sub.j based on the predicted performance and/or thermal properties of the semiconductor device (e.g., the junction temperature is estimated using predicted device loss with estimated thermal impedance) [wherein the plurality of predictive models include at least one predictive model configured to predict a thermal impedance,].”). 
Kim, in view of Wang, and Wang2 are both in the same field of endeavor (i.e. semiconductor performance). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Kim, in view of Wang, and Wang2 to teach the above limitation(s). The motivation for doing so is that the internal temperature of a semiconductor device impacts the performance of the device (cf. Wang2, ⁋18, “The present disclosure generally encompasses systems and methods for monitoring a junction temperature, T.sub.j, of a semiconductor device, such as a semiconductor switch. Because the performance of a semiconductor device is typically at least somewhat temperature sensitive, it is desirable to monitor the junction temperature of the semiconductor device”). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sawlani, et al., US2023/0049157A1 discloses using a plurality of machine learning models for each specific characteristic and then combining the plurality of predictions to design a semiconductor. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS S WU whose telephone number is (571)270-0939. The examiner can normally be reached Monday - Friday 8:00 am - 4:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached on 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/N.S.W./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Dec 30, 2020
Application Filed
Dec 13, 2023
Non-Final Rejection — §103, §112
Apr 02, 2024
Interview Requested
Apr 12, 2024
Applicant Interview (Telephonic)
Apr 12, 2024
Examiner Interview Summary
Apr 22, 2024
Response Filed
Jul 02, 2024
Final Rejection — §103, §112
Aug 26, 2024
Interview Requested
Sep 03, 2024
Examiner Interview Summary
Sep 03, 2024
Applicant Interview (Telephonic)
Sep 11, 2024
Response after Non-Final Action
Oct 11, 2024
Request for Continued Examination
Oct 22, 2024
Response after Non-Final Action
Mar 27, 2025
Non-Final Rejection — §103, §112
Jun 20, 2025
Interview Requested
Jun 27, 2025
Applicant Interview (Telephonic)
Jun 27, 2025
Examiner Interview Summary
Jul 07, 2025
Response Filed
Oct 09, 2025
Final Rejection — §103, §112
Jan 07, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488244
APPARATUS AND METHOD FOR DATA GENERATION FOR USER ENGAGEMENT
2y 5m to grant Granted Dec 02, 2025
Patent 12423576
METHOD AND APPARATUS FOR UPDATING PARAMETER OF MULTI-TASK MODEL, AND STORAGE MEDIUM
2y 5m to grant Granted Sep 23, 2025
Patent 12361280
METHOD AND DEVICE FOR TRAINING A MACHINE LEARNING ROUTINE FOR CONTROLLING A TECHNICAL SYSTEM
2y 5m to grant Granted Jul 15, 2025
Patent 12354017
ALIGNING KNOWLEDGE GRAPHS USING SUBGRAPH TYPING
2y 5m to grant Granted Jul 08, 2025
Patent 12333425
HYBRID GRAPH NEURAL NETWORK
2y 5m to grant Granted Jun 17, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
47%
Grant Probability
90%
With Interview (+43.1%)
3y 9m
Median Time to Grant
High
PTA Risk
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
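The projection figures above appear internally consistent if the interview lift is treated as additive in percentage points on top of the base grant probability; this is an assumption about how the figures relate, not a documented formula.

```python
# Assumed (not documented) relationship between the displayed figures:
# interview lift is additive in percentage points on the base grant probability.
base_grant_probability = 47.0   # % (career allow rate for this examiner)
interview_lift = 43.1           # percentage points
with_interview = base_grant_probability + interview_lift
print(round(with_interview))    # 90 -> matches the displayed "With Interview" value
```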
