Prosecution Insights
Last updated: April 19, 2026
Application No. 18/368,311

METHODS AND MECHANISMS FOR TRACE-BASED TRANSFER LEARNING

Non-Final OA: §101, §102
Filed: Sep 14, 2023
Examiner: VINCENT, DAVID ROBERT
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Applied Materials, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 80% (above average; 568 granted / 706 resolved; +25.5% vs TC avg)
Interview Lift: +3.7% (minimal; based on resolved cases with interview)
Typical Timeline: 3y 2m average prosecution; 27 applications currently pending
Career History: 733 total applications across all art units
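The headline numbers above are mutually consistent and can be cross-checked in a few lines (values taken directly from the cards; the rounding note is an inference):

```python
# Cross-check the examiner statistics reported above.
granted, resolved = 568, 706

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")               # 80.5%, displayed as 80%

# The +25.5% delta implies a Tech Center average of roughly:
print(f"Implied TC 2100 average: {allow_rate - 25.5:.1f}%")  # ~55.0%

# 84% with interview vs. the 80% baseline is consistent with the
# reported +3.7% lift once both figures are rounded for display.
print(f"Interview delta (rounded figures): +{84 - 80}%")
```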

Statute-Specific Performance

§101: 31.0% (-9.0% vs TC avg)
§103: 35.4% (-4.6% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)
Based on career data from 706 resolved cases; TC averages are estimates.
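One detail worth noting: every statute delta implies the same Tech Center baseline, so the dashboard appears to compare each statute against a single ~40% TC estimate rather than per-statute averages. A quick arithmetic check:

```python
# Recover the implied TC baseline for each statute:
# examiner_rate - tc_avg = delta, so tc_avg = examiner_rate - delta.
stats = {
    "101": (31.0, -9.0),
    "103": (35.4, -4.6),
    "102": (14.2, -25.8),
    "112": (13.6, -26.4),
}
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```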

Office Action

§101, §102
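For context on what is being rejected: independent claim 1 recites generating a "transfer model" from trace data of two processing domains (claims 5-8 narrow this to "fundamental traces" and a feature-scaling "transfer map") and using it to adapt a trained model or its inputs. A minimal sketch of that kind of mechanism follows; it is illustrative only, and every function and variable name is hypothetical rather than taken from the application:

```python
# Illustrative sketch of a feature-wise "transfer map" between two
# processing domains, in the spirit of claims 5-8. Names are hypothetical.

def fundamental_trace(traces):
    """Collapse a set of traces into one representative (mean) trace."""
    n = len(traces)
    return [sum(t[i] for t in traces) / n for i in range(len(traces[0]))]

def transfer_map(source_traces, target_traces):
    """Per-feature scale factors relating the target domain to the source."""
    src = fundamental_trace(source_traces)
    tgt = fundamental_trace(target_traces)
    return [s / t if t != 0 else 1.0 for s, t in zip(src, tgt)]

def to_source_domain(tmap, current_trace):
    """Feature-based scaling: map a target-domain trace into the space
    the source-domain model was trained on (cf. claims 7 and 9)."""
    return [m * x for m, x in zip(tmap, current_trace)]

# Example: sensor readings run about 2x higher in the source domain.
source = [[2.0, 4.0], [4.0, 8.0]]   # traces from domain 1
target = [[1.0, 2.0], [2.0, 4.0]]   # traces from domain 2
tmap = transfer_map(source, target)
print(tmap)                                 # [2.0, 2.0]
print(to_source_domain(tmap, [1.0, 3.0]))   # [2.0, 6.0]
```

The scaled trace would then be fed to the source-domain model, which is the pipeline the examiner characterizes as a mental process plus generic computer activity in the rejection below.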
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: claims 1-19 are directed to either a process, machine, manufacture, or composition of matter.

With respect to claims 1 and 13:

2A Prong 1: identifying a machine-learning model trained to generate analytic or predictive data for a first substrate processing domain associated with a type of substrate processing system (encompasses mental observations or evaluations, e.g., a computer programmer's mental identification of data); generating a transfer model for a second substrate processing domain associated with the type of substrate processing system, wherein the transfer model is generated based on the first trace data (collected data) pertaining to the first substrate processing domain and second trace data pertaining to the second substrate processing domain (a mental process of modeling with the assistance of pen and paper; a human mind with pen and paper can generate/determine data or a model); modifying, using the transfer model, at least one of the machine-learning model or current trace data (modifying collected data) associated with the second substrate processing domain "to enable" (intended use) the machine-learning model to generate analytic or predictive data associated with the second substrate processing domain (abstract idea of analyzing data; a mental process of modeling with the assistance of pen and paper).

2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements are: a system, a memory device, and a processing device (each computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component; "the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention." Alice, 134 S. Ct. at 2358); obtaining first trace data (collected data) pertaining to the first substrate processing domain (mere data gathering and output recited at a high level of generality, i.e., insignificant extra-solution activity appended to the judicial exception; see MPEP 2106.05(g)); input to a transfer model (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f)); and the first trace data (collected data) used to train the machine-learning model (likewise mere instructions to apply the exception on a generic computer, see MPEP 2106.05(f); Examiner's note: a high-level recitation of training a machine-learning model with previously determined data).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements identified in Step 2A Prong 2 (the generic computer components; obtaining first trace data; input to a transfer model; the first trace data used to train the machine-learning model) fail here for the same reasons: mere instructions to apply the exception using a generic computer component (Alice, 134 S. Ct. at 2358), insignificant extra-solution activity (MPEP 2106.05(g)), and mere use of a computer as a tool to perform the abstract idea (MPEP 2106.05(f)).

Further, the obtaining step was considered to be extra-solution activity in Step 2A Prong 2, and thus it is re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The receiving and/or transmitting limitations constitute extra-solution activity. See buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) ("That a computer receives and sends the information over a network-with no further specification-is not even arguably inventive."). The court decisions cited in MPEP 2106.05(d)(II) indicate that merely receiving and/or transmitting data over a network (e.g., using the Internet to gather data) is well-understood, routine, conventional activity. See Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). A conclusion that the claimed receiving/transmitting steps are well-understood, routine, conventional activity is therefore supported under Berkheimer. The claim is not patent eligible.

2. The method of claim 1, wherein the first substrate processing domain comprises a first process chamber (a domain refers to a process chamber) and the second substrate processing domain comprises a second process chamber, wherein the first process chamber and the second process chamber are a same type of process chamber (further expands the mental process; a user can model data using different inputs).

3. The method of claim 1, wherein the first substrate processing domain comprises a first process recipe and the second substrate processing domain comprises a second process recipe (further expands the mental process; a user can model data using different inputs).

4. The method of claim 1, wherein the first trace data (collected data) comprises a first set of traces associated with the first substrate processing domain and the second trace data comprises a second set of traces associated with the second substrate processing domain (further expands the mental process; a user can model data using different inputs).

5. The method of claim 4, further comprising: generating, from the first set of traces, a first fundamental trace; and generating, from the second set of traces, a second fundamental trace (further expands the mental process; a user can model data using different inputs).

6. The method of claim 5, further comprising: generating, based on the first fundamental trace and the second fundamental trace, a transfer map reflecting a relationship between the first fundamental trace and the second fundamental trace (further expands the mental process; modeling with the assistance of pen and paper).

7. The method of claim 6, where the transfer map (can be pen and paper) provides feature-based scaling in reflecting the relationship between the first fundamental trace and second fundamental trace (further expands the mental process; modeling with the assistance of pen and paper).

8. The method of claim 6, wherein the transfer map is used to generate the transfer model (further expands mental modeling with the assistance of pen and paper).

9. The method of claim 1, further comprising: providing, as input to the transfer model, current trace data pertaining to the second substrate processing domain (data gathering); obtaining one or more first output values of the transfer model (data gathering); providing, as input to the machine-learning model, the one or more first output values (receiving/transmitting); and obtaining one or more second output values of the machine learning model, the one or more second output values reflecting the analytic or predictive data associated with the second substrate processing domain (receiving/transmitting steps are considered to be extra-solution activity).

10. The method of claim 9, further comprising: performing a corrective action based on the one or more second output values of the machine-learning model (using the trained machine-learning model to make corrections; mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, see MPEP 2106.05(f); Examiner's note: a high-level application of a previously trained model to make a prediction).

11. The method of claim 1, further comprising: retraining the machine-learning model using the transfer model (such training is generic; training is required for any machine-learning model, and using a machine-learning technique necessarily includes an iterative training step; iterative training using selected training material and/or dynamic adjustments based on changes are incident to the very nature of machine learning); providing, as input to the retrained machine-learning model, current trace data pertaining to the second substrate processing domain; and obtaining one or more output values of the retrained machine-learning model reflecting the analytic or predictive data associated with the second substrate processing domain (receiving/transmitting steps are considered to be extra-solution activity).

12. The method of claim 11, further comprising: performing a corrective action based on the one or more output values of the machine-learning model (using the trained machine-learning model to make corrections).

14. The system of claim 13, wherein the first trace data comprises a first set of traces associated with the first substrate processing domain and the second trace data comprises a second set of traces associated with the second substrate processing domain (receiving/transmitting steps are considered to be extra-solution activity).

15. The system of claim 14, wherein the operations further comprise: generating, from the first set of traces, a first fundamental trace; and generating, from the second set of traces, a second fundamental trace (abstract idea of analyzing data; a mental process; a human mind with pen and paper can generate/determine data).

16. The system of claim 15, wherein the operations further comprise: generating, based on the first fundamental trace and the second fundamental trace, a transfer map reflecting a relationship between the first fundamental trace and the second fundamental trace (abstract idea of analyzing data; a mental process; a human mind with pen and paper can generate/determine data).

17. The system of claim 13, wherein the operations further comprise: providing, as input to the transfer model, current trace data pertaining to the second substrate processing domain; obtaining one or more first output values of the transfer model; providing, as input to the machine-learning model, the one or more first output values; and obtaining one or more second output values of the machine learning model, the one or more second output values reflecting the analytic or predictive data associated with the second substrate processing domain (receiving/transmitting steps are considered to be extra-solution activity).

18. The system of claim 11, wherein the operations further comprise: retraining the machine-learning model using the transfer model; providing, as input to the retrained machine-learning model, current trace data pertaining to the second substrate processing domain; and obtaining one or more output values of the retrained machine-learning model reflecting the analytic or predictive data associated with the second substrate processing domain (receiving/transmitting steps are considered to be extra-solution activity).

19.
The system of claim 18, wherein the operations further comprise: performing a corrective action based on the one or more output values (using a model).

Claim Rejections – 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-23 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li (US 2023/0376373). The applied reference has a common inventor and assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.

Li (US 2023/0376373) discloses:

13, 1. A system, comprising: a memory device (e.g., data store, 140, Fig.
1, or any inherent computer/server memories; 904, 906, 918, Fig. 9); and a processing device (e.g., servers/clients 110, 170, 112, 120, Fig. 1; 902, Fig. 9), operatively coupled to the memory device, to perform operations comprising:

identifying a machine-learning model (e.g., models, 190; 0036; 0040) trained (e.g., 746, Fig. 7C; "generate predictive data 168 using supervised machine learning (e.g., supervised data set, performance data 150 includes metrology data, the trace data 142 used to train a model 190 is associated with good substrates and bad substrates, etc.).", 0036) to generate analytic (any data output by models, 0036) or predictive data (0036) for a first substrate (substrates, 0035-0036) processing domain associated with a type of substrate processing system ("The present disclosure addresses false and missed positives and is adaptive to provide robustness over time and flexibility to address different domains (e.g., see FIGS. 8A-B).", 0072; "Vertical and horizontal scaling is applied to address domain transfer (e.g., applying guardband to a different domain, such as a different recipe)", 0240);

obtaining first trace data pertaining to the first substrate processing domain ("new trace data for substrates where it is to be determined whether the substrates are good or bad", 0220; 0223, 0238), the first trace data used to train the machine-learning model ("At block 746, processing logic identifies a training set of trace data", 0222, 0223);

generating a transfer model (models that can be used in different domains or for different substrates, 0197-0199; 0150; 0152; e.g., 190, Fig. 1) for a second substrate processing domain ("new trace data includes new sensor data associated with producing new substrates with the same substrate processing equipment as block 746 or with different substrate processing equipment", 0223; or historical trace data, 144, Fig. 1) associated with the type of substrate processing system, wherein the transfer model is generated based on the first trace data pertaining to the first substrate processing domain and second trace data pertaining to the second substrate processing domain ("Trace data may include sets of sensor data associated with production of different substrates and from different types of sensors", 0021; new sensor data with the same or different substrate processing equipment, 0223; or historical trace data, 144, Fig. 1); and

modifying, using the transfer model, at least one of the machine-learning model or current trace data associated with the second substrate processing domain to enable the machine-learning model to generate analytic or predictive data associated with the second substrate processing domain (plurality of models, 0057-8; models for different domains, "flexibility to address different domains (e.g., see FIGS. 8A-B)", 0072; "Vertical and horizontal scaling is applied to address domain transfer (e.g., applying guardband to a different domain, such as a different recipe)", 0240; "aspects of the disclosure describe the training of one or more machine learning models 190 using historical data (e.g., historical trace data 144, historical performance data 152) and inputting current data (e.g., current trace data 146) into the one or more trained machine learning models 190 to determine predictive data 168.", 0064; "the corrective action includes updating a process recipe to produce subsequent substrates", 0046).

2.
The method of claim 1, wherein the first substrate processing domain comprises a first process chamber and the second substrate processing domain comprises a second process chamber, wherein the first process chamber and the second process chamber are a same type of process chamber (a domain can refer to, e.g., a process chamber or a process recipe; plurality of models, 0057-8; models for different domains, 0072; vertical/horizontal scaling for domain transfer, 0240).

3. The method of claim 1, wherein the first substrate processing domain comprises a first process recipe and the second substrate processing domain comprises a second process recipe (a domain can refer to, e.g., a process chamber or a process recipe; 0057-8; 0072; 0240, as cited above).

4. The method of claim 1, wherein the first trace data comprises a first set of traces associated with the first substrate processing domain and the second trace data comprises a second set of traces associated with the second substrate processing domain (0057-8; 0072; 0240, as cited above).

5. The method of claim 4, further comprising: generating, from the first set of traces, a first fundamental trace; and generating, from the second set of traces, a second fundamental trace (0057-8; 0072; 0240, as cited above).

6. The method of claim 5, further comprising: generating, based on the first fundamental trace and the second fundamental trace, a transfer map reflecting a relationship between the first fundamental trace and the second fundamental trace (0057-8; 0072; 0240, as cited above).

7. The method of claim 6, where the transfer map provides feature-based scaling in reflecting the relationship between the first fundamental trace and second fundamental trace (scaling, 0237-0249; 0057-8; 0072; 0240, as cited above).

8. The method of claim 6, wherein the transfer map is used to generate the transfer model (e.g., "The training engine 182 may be capable of training machine learning model 190 or various machine learning models included in model 190 using one or more sets of elements associated with the training set from data set generator 172. The training engine 182 may generate multiple trained machine learning models 190, where each trained machine learning model 190 corresponds to a distinct set of elements of the training set (e.g., sensor data from a distinct set of sensors). For example, a first trained machine learning model may have been trained using all elements (e.g., X1-X5), a second trained machine learning model may have been trained using a first subset of elements (e.g., X1, X2, X4), and a third trained machine learning model may have been trained using a second subset of elements (e.g., X1, X3, X4, and X5) that may partially overlap the first subset of elements.", 0057-0064).

9. The method of claim 1, further comprising: providing, as input to the transfer model, current trace data pertaining to the second substrate processing domain; obtaining one or more first output values of the transfer model; providing, as input to the machine-learning model, the one or more first output values; and obtaining one or more second output values of the machine learning model, the one or more second output values reflecting the analytic or predictive data associated with the second substrate processing domain (plurality of models, 0057-8; different domains, 0072; domain transfer, 0240; training on historical data and inputting current data to determine predictive data 168, 0064; corrective action by updating a process recipe, 0046).

10. The method of claim 9, further comprising: performing a corrective action based on the one or more second output values of the machine-learning model ("the corrective action includes updating a process recipe to produce subsequent substrates", 0046; Figs. 2).

11.
The method of claim 1, further comprising: retraining the machine-learning model using the transfer model ( “ Sensor data associated with substrate processing operations is collected over time ” 0018; “ Responsive to the confidence data indicating a level of confidence below a threshold level for a predetermined number of instances (e.g., percentage of instances, frequency of instances, frequency of occurrence, total number of instances, etc.) the predictive component 114 may cause model 190 to be re-trained (e.g., based on current trace data 146, manufacturing parameters, current performance data 154, etc.). ”, 0063; models are trained and retrained in order to optimize or reach convergence, “ the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190. In some embodiments, validation engine 184 and selection engine 185 may repeat this process for each machine learning model include in model 190. ”, 0058, 0060) ; providing, as input to the retrained machine-learning model, current trace data pertaining to the second substrate processing domain; and obtaining one or more output values of the retrained machine-learning model reflecting the analytic or predictive data associated with the second substrate processing domain (“ Responsive to the confidence data indicating a level of confidence below a threshold level for a predetermined number of instances (e.g., percentage of instances, frequency of instances, frequency of occurrence, total number of instances, etc.) the predictive component 114 may cause model 190 to be re-trained (e.g., based on current trace data 146, manufacturing parameters, current performance data 154, etc.). ”, 0063; “ the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190. 
In some embodiments, validation engine 184 and selection engine 185 may repeat this process for each machine learning model include in model 190. ”, 0058, 0060) . 12. The method of claim 11, further comprising: performing a corrective action based on the one or more output values of the machine-learning model ( e.g., “ the corrective action includes updating a process recipe to produce subsequent substrates ”, 0046 ; Figs. 2 ) . 14. The system of claim 13, wherein the first trace data comprises a first set of traces (collected/senor data “ sensor data is summarized across a particular recipe or recipe operation associated with the production of a substrate by equipment ”, 0019; “ Trace data may include sets of sensor data associated with production of different substrates and from different types of sensors. ”, 0021) associated with the first substrate processing domain and the second trace data comprises a second set of traces associated with the second substrate processing domain (“ new trace data for substrates where it is to be determined whether the substrates are good or bad ”, 0220; 0223, 0238plurality of models, 0057-8; models for different domains, “ The present disclosure addresses false and missed positives and is adaptive to provide robustness over time and flexibility to address different domains (e.g., see FIGS. 8A-B). ”, 0072; “ Vertical and horizontal scaling is applied to address domain transfer (e.g., applying guardband to a different domain, such as a different recipe) ”, 0240; “ aspects of the disclosure describe the training of one or more machine learning models 190 using historical data (e.g., historical trace data 144, historical performance data 152) and inputting current data (e.g., current trace data 146) into the one or more trained machine learning models 190 to determine predictive data 168. ”, 0064; “ the corrective action includes updating a process recipe to produce subsequent substrates ”, 0046) . 15. 
The system of claim 14, wherein the operations further comprise: generating, from the first set of traces, a first fundamental trace; and generating, from the second set of traces, a second fundamental trace (collected/senor data “ sensor data is summarized across a particular recipe or recipe operation associated with the production of a substrate by equipment ”, 0019; “ Trace data may include sets of sensor data associated with production of different substrates and from different types of sensors. ”, 0021) . 16. The system of claim 15, wherein the operations further comprise: generating, based on the first fundamental trace and the second fundamental trace, a transfer map reflecting a relationship between the first fundamental trace and the second fundamental trace (collected/senor data “ sensor data is summarized across a particular recipe or recipe operation associated with the production of a substrate by equipment ”, 0019; “ Trace data may include sets of sensor data associated with production of different substrates and from different types of sensors. In some embodiments, the data is analyzed at the trace level (e.g., as opposed to just providing summary statistics of a sensor across a recipe or recipe operation) by using guardbands. ”, 0021 ; domain can refer to e.g., process chamber or process recipe ; plurality of models, 0057-8; models for different domains, “ The present disclosure addresses false and missed positives and is adaptive to provide robustness over time and flexibility to address different domains (e.g., see FIGS. 8A-B). ”, 0072; “ Vertical and horizontal scaling is applied to address domain transfer (e.g., applying guardband to a different domain, such as a different recipe) ”, 0240) . 17. 
The system of claim 13, wherein the operations further comprise: providing, as input to the transfer model, current trace data pertaining to the second substrate processing domain; obtaining one or more first output values of the transfer model (collected/senor data “ Sensor data associated with substrate processing operations is collected over time ”, 0018 ; “ sensor data is summarized across a particular recipe or recipe operation associated with the production of a substrate by equipment ”, 0019 ) ; providing, as input to the machine-learning model, the one or more first output values; and obtaining one or more second output values of the machine learning model, the one or more second output values reflecting the analytic or predictive data associated with the second substrate processing domain ( models are trained and retrained in order to optimize or reach convergence, “ the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190. In some embodiments, validation engine 184 and selection engine 185 may repeat this process for each machine learning model include in model 190. ”, 0058, 0060 ) . 18. The system of claim 11, wherein the operations further comprise: retraining the machine-learning model using the transfer model; providing, as input to the retrained machine-learning model, current trace data pertaining to the second substrate processing domain; and obtaining one or more output values of the retrained machine-learning model reflecting the analytic or predictive data associated with the second substrate processing domain ( models are trained and retrained in order to optimize or reach convergence, “ the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190. 
In some embodiments, validation engine 184 and selection engine 185 may repeat this process for each machine learning model include in model 190.”, 0058, 0060).

19. The system of claim 18, wherein the operations further comprise: performing a corrective action based on the one or more output values (e.g., “the corrective action includes updating a process recipe to produce subsequent substrates”, 0046; Figs. 2).

20. A method, comprising: providing, as input to a transfer model, current trace data associated with a target substrate processing domain, wherein the transfer model is generated based on historical trace data associated with the target substrate processing domain and historical trace data associated with a source substrate processing domain (e.g., 144, Fig. 1; “identifying trace data including a plurality of data points, the trace data being associated with production, via a substrate processing system, of substrates. The method further includes comparing the trace data to a guardband generated based on historical trace data and a plurality of allowable types of variance associated with the historical trace data, the historical trace data being associated with historical production, via the substrate processing system, of historical substrates having historical property values that meet threshold values, the guardband including an upper limit and a lower limit for fault detection”, 0005), wherein the source substrate processing domain and the target substrate processing domain are both associated with a type of substrate processing system; obtaining one or more first output values of the transfer model reflective of the current trace data modified by a set of offset values; providing, as input to a machine-learning model trained to generate analytic or predictive data for the source substrate processing domain, the one or more first output values from the transfer model; and obtaining one or more second output values of the machine learning
model, the one or more second output values representing analytic or predictive data associated with the target substrate processing domain (plurality of models that are trained and retrained to reach optimization or convergence, 0057-8; models for different domains, “The present disclosure addresses false and missed positives and is adaptive to provide robustness over time and flexibility to address different domains (e.g., see FIGS. 8A-B).”, 0072; “Vertical and horizontal scaling is applied to address domain transfer (e.g., applying guardband to a different domain, such as a different recipe)”, 0240; “aspects of the disclosure describe the training of one or more machine learning models 190 using historical data (e.g., historical trace data 144, historical performance data 152) and inputting current data (e.g., current trace data 146) into the one or more trained machine learning models 190 to determine predictive data 168.”, 0064).

21. The method of claim 20, further comprising: performing a corrective action based on the one or more second output values of the machine-learning model (e.g., “the corrective action includes updating a process recipe to produce subsequent substrates”, 0046).

22. A method, comprising: retraining a machine-learning model using a transfer model, wherein the transfer model is generated based on historical trace data associated with a target substrate processing domain and historical trace data associated with a source substrate processing domain (“Sensor data associated with substrate processing operations is collected over time”, 0018; models are trained and retrained in order to optimize or reach convergence, “the selection engine 185 may be capable of selecting the trained machine learning model 190 that has the highest accuracy of the trained machine learning models 190. In some embodiments, validation engine 184 and selection engine 185 may repeat this process for each machine learning model include in model 190.
”, 0058, 0060), wherein the source substrate processing domain and the target substrate processing domain are associated with a type of substrate processing system, wherein the machine-learning model is trained to generate analytic or predictive data for the source substrate processing domain (predictive, Fig. 9 or 168, Fig. 1); providing, as input to the retrained machine-learning model, current trace data pertaining to the target substrate processing domain; and obtaining one or more output values of the retrained machine-learning model reflecting analytic or predictive data associated with the target substrate processing domain (plurality of models, 0057-8; models for different domains, “The present disclosure addresses false and missed positives and is adaptive to provide robustness over time and flexibility to address different domains (e.g., see FIGS. 8A-B).”, 0072; “Vertical and horizontal scaling is applied to address domain transfer (e.g., applying guardband to a different domain, such as a different recipe)”, 0240; “aspects of the disclosure describe the training of one or more machine learning models 190 using historical data (e.g., historical trace data 144, historical performance data 152) and inputting current data (e.g., current trace data 146) into the one or more trained machine learning models 190 to determine predictive data 168.”, 0064).

23. The method of claim 22, further comprising: performing a corrective action based on the one or more second output values of the modified machine-learning model (e.g., “the corrective action includes updating a process recipe to produce subsequent substrates”, 0046; 208, 228, Figs. 2).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID R VINCENT, whose telephone number is (571) 272-3080. The examiner can normally be reached
Mon-Fri 12-8:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID R VINCENT/
Primary Examiner, Art Unit 2123
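For readers mapping the rejected claims to the underlying mechanics, the transfer flow recited in claims 15-22 can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the applicant's actual implementation: the function names and the toy threshold "model" are invented for clarity, and the per-point offsets stand in for the "vertical and horizontal scaling" the examiner cites from paragraph 0240.

```python
# Hypothetical sketch of the claimed transfer flow; names and the toy
# threshold "model" are illustrative only, not the application's code.

def fundamental_trace(traces):
    # Claim 15: summarize a set of equal-length sensor traces into one
    # representative ("fundamental") trace, here an elementwise mean.
    n = len(traces)
    return [sum(vals) / n for vals in zip(*traces)]

def transfer_map(source_fund, target_fund):
    # Claim 16: a transfer map reflecting the relationship between the two
    # fundamental traces, modeled here as per-point offsets.
    return [t - s for s, t in zip(source_fund, target_fund)]

def apply_transfer(current_trace, offsets):
    # Claims 17/20: the transfer model maps a target-domain trace into
    # source-domain terms by removing the offsets (scaling reduced to a
    # pure shift for illustration).
    return [x - o for x, o in zip(current_trace, offsets)]

def source_model(trace):
    # Stand-in for the machine-learning model trained on the source
    # domain: flags traces whose mean drifts from the expected value.
    mean = sum(trace) / len(trace)
    return "fault" if abs(mean - 3.0) > 0.5 else "normal"

# Historical traces for each domain (claims 13-15).
src_fund = fundamental_trace([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])  # [2.0, 3.0, 4.0]
tgt_fund = fundamental_trace([[2.0, 4.0, 6.0], [4.0, 6.0, 8.0]])  # [3.0, 5.0, 7.0]
offsets = transfer_map(src_fund, tgt_fund)                        # [1.0, 2.0, 3.0]

# Claim 20: current target-domain trace -> transfer model -> source model.
first_outputs = apply_transfer([3.1, 5.2, 6.9], offsets)
prediction = source_model(first_outputs)
```

Claims 18 and 22 instead retrain the model itself: under the same assumptions, the offsets could be added to the source-domain training traces so the retrained model consumes target-domain data directly. Either way, the corrective-action claims (19, 21, 23) key off the model's output, e.g., updating the process recipe only when a fault is flagged.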

Prosecution Timeline

Sep 14, 2023
Application Filed
Mar 25, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602916
OBJECT MODELING WITH ADVERSARIAL LEARNING
2y 5m to grant Granted Apr 14, 2026
Patent 12585949
SYSTEM AND METHOD FOR DESIGNING EFFICIENT SUPER RESOLUTION DEEP CONVOLUTIONAL NEURAL NETWORKS BY CASCADE NETWORK TRAINING, CASCADE NETWORK TRIMMING, AND DILATED CONVOLUTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12572951
DISTRIBUTED MACHINE LEARNING DECENTRALIZED APPLICATION PLATFORM
2y 5m to grant Granted Mar 10, 2026
Patent 12524701
DEVICE AND COMPUTER-IMPLEMENTED METHOD FOR DATA-EFFICIENT ACTIVE MACHINE LEARNING
2y 5m to grant Granted Jan 13, 2026
Patent 12524658
SPECIAL PURPOSE NEURAL NETWORK TRAINING CHIP
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
84%
With Interview (+3.7%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 706 resolved cases by this examiner. Grant probability derived from career allow rate.
