DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is responsive to the filing of 09/16/2025. Claims 1, 8 and 10 have been amended and/or canceled. Claims 1-20 are pending in this case. Claims 1, 10 and 15 are independent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Stenson et al. (Pub No.: 20230150529 A1), hereinafter referred to as Stenson, in view of Liu et al. (Pub No.: 20190237354 A1), hereinafter referred to as Liu.
With respect to claim 1, Stenson discloses:
A method, comprising: receiving, by a processing device, training data comprising (i) first sensor data indicating a first state of an environment of a <first sensing condition> processing a first substrate at a manufacturing system, (ii) first process tool data comprising a first relative operation life of one or more first processing tools relative to other processing tools of the manufacturing system, and (iii) first process result data corresponding to the first substrate (Based on the examiner's broadest reasonable interpretation (BRI) and the lack of details in the specification, relative operation life is interpreted as describing a performance of the system. In FIG. 3A and paragraph [0054], Stenson discloses collecting first sensor data and second sensor data. The input first sensor data captures different conditions (e.g., a first sensing condition), including scenarios such as different geographical locations, time of day (e.g., daytime vs. nighttime), and atmospheric conditions (e.g., different weather conditions affecting visibility and sensor performance). In paragraphs [0056-0057], Stenson discloses the discriminator model that processes the input data. After processing the input data (e.g., the first object from the input sensor data), the model outputs a confidence level indicating how sure it is about its classification, expressed as a percentage. A classification is deemed successful or unsuccessful. In paragraph [0062], Stenson discloses determining a difference between the first object and the modified/second object. Depending on the sensor data type, the difference may be in the form of a pixel value difference for images or a positional data difference and/or intensity value difference for point clouds.)
training, by the processing device, a first model with input data comprising the first sensor data and the first process tool data and target output comprising the process result data, wherein the trained first model is to receive a new input having second sensor data indicating a second state of an environment of a <second sensing condition> processing a second substrate and second process tool data indicating a second time-dependent state of a second processing tool processing the second substrate to produce a second output based on the new input, the second output indicating a second process result data corresponding to the second substrate (In FIG. 3A and paragraph [0054], Stenson discloses that the discriminator model is trained for a second sensing condition different from the first sensing condition. This new sensor data is sent back to the discriminator model as input. The vehicle's discriminator model can identify the changed object from the second sensor data. The generator model changes the sensor data, which helps the discriminator model recognize the changed object, even though it was trained with different sensor information than what it sees now.)
With respect to claim 1, Stenson does not specifically disclose:
The <first sensing condition> as being the claimed first processing chamber
The <second sensing condition> as being the claimed second processing chamber
However, Liu is known to disclose:
The <first sensing condition> as being the claimed first processing chamber (In paragraph [0076], Liu discloses a first semiconductor processing chamber collecting sensor data.)
The <second sensing condition> as being the claimed second processing chamber (In paragraph [0076], Liu discloses a second semiconductor processing chamber collecting sensor data.)
Stenson and Liu are analogous pieces of art because both references concern collecting sensor data of an environment. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Stenson, with training data from a collection of sensor data (e.g., images, LIDAR sensor data, RADAR sensor data, etc.) with labeled objects to facilitate training of ML model(s) for object detection and/or sensor data adaptation or augmentation as taught by Stenson, with collecting sensor data characterizing semiconductor processing chambers as taught by Liu. The motivation for doing so would have been to increase efficiency in training and/or adapting simulation data and/or simulation models for new cities and/or new environmental conditions through the use of a closed-loop deep learning system (see [0017] of Stenson).
Regarding claim 8, Stenson in view of Liu discloses the elements of claim 1. In addition, Stenson discloses:
The method of claim 1, wherein the first relative operation life of the first process tool relative to other process tools of a selection of process tools comprises a value indicating one or more of a number of substrates historically processed by the one or more first processing tools relative to the other processing tools, or a number of substrates processed since a prior preventative maintenance procedure at the one or more first processing tools relative to the other processing tools (Based on the examiner's broadest reasonable interpretation (BRI) and the lack of details in the specification, relative operation life is interpreted as describing a performance of the system. In FIG. 3A and paragraph [0054], Stenson discloses collecting first sensor data and second sensor data. The input first sensor data captures different conditions (e.g., a first sensing condition), including scenarios such as different geographical locations, time of day (e.g., daytime vs. nighttime), and atmospheric conditions (e.g., different weather conditions affecting visibility and sensor performance). In paragraphs [0056-0057], Stenson discloses the discriminator model that processes the input data. After processing the input data (e.g., the first object from the input sensor data), the model outputs a confidence level indicating how sure it is about its classification, expressed as a percentage.)
Claim(s) 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of Liu and further in view of Morales et al. (Pub No.: 20210192748 A1), hereinafter referred to as Morales.
Regarding claim 2, Stenson in view of Liu discloses the elements of claim 1. Stenson in view of Liu does not explicitly disclose:
The method of claim 1, wherein training the first model further comprises: processing the first process result data using the first process tool data to generate time-independent process result data
A first regression to be performed using the time-independent process result data and the first sensor data
However, Morales discloses the limitations:
The method of claim 1, wherein training the first model further comprises: processing the first process result data using the first process tool data to generate time-independent process result data (In paragraph [0087], Morales discloses that the machine learning model may include a first network or first set of networks configured to output trajectory templates and/or heat maps representative of predicted object locations at a time in the future (e.g., end of a prediction horizon), a second network or second set of networks configured to output predicted trajectories.)
A first regression to be performed using the time-independent process result data and the first sensor data (In paragraph [0087], Morales discloses that the first network or first set of networks may be trained utilizing one or more clustering algorithms and the second network or second set of networks may be trained utilizing one or more regression algorithms.)
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Stenson in view of Liu with the generating of predicted trajectories of objects in an environment as taught by Morales. The motivation for doing so would have been to improve predictions associated with an object (see [0119] of Morales).
Regarding claim 3, Stenson in view of Liu and Morales discloses the elements of claim 2. In addition, Morales discloses:
The method of claim 2, wherein training the first model further comprises: determining a residual between the first process result data and the time- independent process result data (In paragraph [0087], Morales discloses that the machine learning model may include a first network or first set of networks configured to output trajectory templates and/or heat maps representative of predicted object locations at a time in the future (e.g., end of a prediction horizon), a second network or second set of networks configured to output predicted trajectories.)
A second regression to be performed using the residual and the first sensor data (In paragraph [0087], Morales discloses that the first network or first set of networks may be trained utilizing one or more clustering algorithms and the second network or second set of networks may be trained utilizing one or more regression algorithms.)
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Stenson in view of Liu with the generating of predicted trajectories of objects in an environment as taught by Morales. The motivation for doing so would have been to improve predictions associated with an object (see [0119] of Morales).
Regarding claim 4, Stenson in view of Liu and Morales discloses the elements of claim 3. In addition, Morales discloses:
The method of claim 3, wherein at least one of the first regression or the second regression is performed using a partial least squares (PLS) algorithm (In paragraph [0077], Morales discloses the machine learning algorithms include Partial Least Squares Regression (PLSR).)
Regarding claim 5, Stenson in view of Liu and Morales discloses the elements of claim 3. In addition, Morales discloses:
The method of claim 3, wherein at least one of the first regression or the second regression is performed as part of a gradient boosting regression (GBR) algorithm (In paragraph [0077], Morales discloses the machine learning algorithms include gradient boosting machines (GBM).)
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of Liu and further in view of Fox et al. (US Patent No.11,128,737 B1), hereinafter referred to as Fox.
Regarding claim 6, Stenson in view of Liu discloses the elements of claim 1. Stenson in view of Liu does not explicitly disclose:
The method of claim 1, wherein training the first model further comprises: causing a first regression to be performed using a first subset of training data to generate a first regression model
A second regression to be performed using a second subset of training data to generate a second regression model
determining a first accuracy of the first regression model is greater than a second accuracy of the second regression model based on a comparison of the first regression model, the second regression model, and the training data
However, Fox discloses the limitations:
The method of claim 1, wherein training the first model further comprises: causing a first regression to be performed using a first subset of training data to generate a first regression model (In Col.8, lines 34-39, Fox discloses that the first data model server 108 may use a gradient-boosting regression model to generate the first artificial intelligence data model.)
A second regression to be performed using a second subset of training data to generate a second regression model (In Col. 10, lines 45-50, Fox discloses the second data model server 110 may generate the second artificial intelligence data model using logistic regression and gradient boosting tree.)
determining a first accuracy of the first regression model is greater than a second accuracy of the second regression model based on a comparison of the first regression model, the second regression model, and the training data (In Col. 29, lines 64-67, Fox discloses finding out whether the accuracy of the first regression model is better than the accuracy of the second regression model by comparing both models and the training data.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of Liu before them, to include Fox's data modeling to optimize data quality associated with the machine learning model by tuning different parameters as taught by Fox (see Col. 8, lines 45-49).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of Liu and further in view of YOKOMIZO et al. (US Pub No.: 20220285234 A1), hereinafter referred to as YOKOMIZO.
Regarding claim 7, Stenson in view of Liu discloses the elements of claim 1. Stenson in view of Liu does not explicitly disclose:
The method of claim 1, wherein the first process result data comprises a value corresponding to an etch bias of the first substrate
However, YOKOMIZO discloses the limitation (In paragraph [0056], YOKOMIZO discloses that the first substrate comprises a value corresponding to an etch bias).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of Liu before them, to include YOKOMIZO’s wafer bonding to optimize the design for the first alignment diagnostic structures 930 and the second alignment diagnostic structures as taught by YOKOMIZO (see [0092]).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of Liu and further in view of FUKUNAGA et al. (US Pub No.: 20220402087 A1), hereinafter referred to as FUKUNAGA.
Regarding claim 9, Stenson in view of Liu discloses the elements of claim 1. Stenson in view of Liu does not explicitly disclose:
The method of claim 1, wherein the first process result data indicates a first average thickness associated with a central region of the first substrate and a second average thickness associated with an edge region of the first substrate
However, FUKUNAGA discloses the limitation (In paragraph [0018], FUKUNAGA discloses the first substrate in the state that a rear surface of the second substrate is held by a substrate holder. A thickness of a first substrate W becomes large, whereas in a portion where the thickness of the second substrate S is large, the thickness of the first substrate W becomes small.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of Liu before them, to include FUKUNAGA’s substrate processing apparatus to improve flatness of a first substrate in a combined substrate in which the first substrate and a second substrate are bonded to each other as taught by FUKUNAGA (see [0004]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Bramley et al. (Pub No.: 20210223780 A1), hereinafter referred to as Bramley, in view of Stenson et al. (Pub No.: 20230150529 A1), hereinafter referred to as Stenson.
With respect to claim 10, Bramley discloses:
Processing the sensor data and the process tool data using one or more machine-learning models (MLMs) to determine a prediction of a process result measurement of the first substrate (In paragraph [0038], Bramley discloses that the machine learning model outputs predictions based on combined sensor data. These outputs can show things like types, sizes, locations, and other details related to the data. The outputs of the machine learning model for one or more of the predictions may include a confidence value.)
Performing, by the processing device, at least one of a) preparing the prediction for presentation on a graphical user interface (GUI) or b) altering an operation of at least one of the processing chamber or the processing tool based on the prediction (In paragraph [0047], Bramley discloses a visual representation of an instance of the process 100 of FIG. 1. The machine learning model then generates outputs 406 (e.g., pixel-by-pixel output data with each pixel associated with a value based on the predicted object classification (or empty class), the pixels corresponding to positions or locations with respect to the combined sensor data 106) that may correspond to predictions 410A related to the sensor data 102 and predictions 410B related to the motif(s) 104.)
With respect to claim 10, Bramley does not specifically disclose:
A method, comprising: receiving, by a processing device, (i) sensor data indicating a state of an environment of a processing chamber processing a first substrate according to a substrate processing procedure at a manufacturing system and (ii) process tool data comprising a relative operation life of a processing tool processing the first substrate relative to other process tools of the manufacturing system
However, Stenson is known to disclose:
A method, comprising: receiving, by a processing device, (i) sensor data indicating a state of an environment of a processing chamber processing a first substrate according to a substrate processing procedure at a manufacturing system and (ii) process tool data comprising a relative operation life of a processing tool processing the first substrate relative to other process tools of the manufacturing system (Based on the examiner's broadest reasonable interpretation (BRI) and the lack of details in the specification, relative operation life is interpreted as describing a performance of the system. In FIG. 3A and paragraph [0054], Stenson discloses collecting first sensor data and second sensor data. The input first sensor data captures different conditions (e.g., a first sensing condition), including scenarios such as different geographical locations, time of day (e.g., daytime vs. nighttime), and atmospheric conditions (e.g., different weather conditions affecting visibility and sensor performance). In paragraphs [0056-0057], Stenson discloses the discriminator model that processes the input data. After processing the input data (e.g., the first object from the input sensor data), the model outputs a confidence level indicating how sure it is about its classification, expressed as a percentage. A classification is deemed successful or unsuccessful. In paragraph [0062], Stenson discloses determining a difference between the first object and the modified/second object. Depending on the sensor data type, the difference may be in the form of a pixel value difference for images or a positional data difference and/or intensity value difference for point clouds.)
Stenson and Bramley are analogous pieces of art because both references concern a deep neural network (DNN). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Bramley, with determining that the predictions are accurate when, based at least in part on the comparing, the expected predictions are represented in the predictions of the neural network as taught by Bramley, with training data from a collection of sensor data (e.g., images, LIDAR sensor data, RADAR sensor data, etc.) with labeled objects to facilitate training of ML model(s) for object detection and/or sensor data adaptation or augmentation as taught by Stenson. The motivation for doing so would have been to increase efficiency in training and/or adapting simulation data and/or simulation models for new cities and/or new environmental conditions through the use of a closed-loop deep learning system (see [0017] of Stenson).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Bramley in view of Stenson and further in view of YOKOMIZO et al. (US Pub No.: 20220285234 A1), hereinafter referred to as YOKOMIZO.
Regarding claim 11, Bramley in view of Stenson discloses the elements of claim 10. Bramley in view of Stenson does not explicitly disclose:
The method of claim 10, wherein the prediction of the process result measurement comprises a value corresponding to an etch bias of the first substrate
However, YOKOMIZO discloses the limitation (In paragraph [0056], YOKOMIZO discloses that the first substrate comprises a value corresponding to an etch bias).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bramley in view of Stenson before them, to include YOKOMIZO’s wafer bonding to optimize the design for the first alignment diagnostic structures 930 and the second alignment diagnostic structures as taught by YOKOMIZO (see [0092]).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Bramley in view of Stenson and further in view of FUKUNAGA et al. (US Pub No.: 20220402087 A1), hereinafter referred to as FUKUNAGA.
Regarding claim 12, Bramley in view of Stenson discloses the elements of claim 10. Bramley in view of Stenson does not explicitly disclose:
The method of claim 10, wherein the prediction of the process result measurement indicates a first average thickness associated with a central region of the first substrate and a second average thickness associated with an edge region of the first substrate
However, FUKUNAGA discloses the limitation (In paragraph [0018], FUKUNAGA discloses the first substrate in the state that a rear surface of the second substrate is held by a substrate holder. A thickness of a first substrate W becomes large, whereas in a portion where the thickness of the second substrate S is large, the thickness of the first substrate W becomes small.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bramley in view of Stenson before them, to include FUKUNAGA’s substrate processing apparatus to improve flatness of a first substrate in a combined substrate in which the first substrate and a second substrate are bonded to each other as taught by FUKUNAGA (see [0004]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Bramley in view of Stenson and further in view of EL-Shaer et al. (US Pub No.: 20230219561 A1), hereinafter referred to as EL-Shaer.
Regarding claim 13, Bramley in view of Stenson discloses the elements of claim 10. Bramley in view of Stenson does not explicitly disclose:
The method of claim 10, wherein processing the sensor data and the process tool data further comprises processing the sensor data using the process tool data to generate modified sensor data, wherein the modified sensor data comprises sensor data weighted according to the process tool data, wherein the prediction is determined based on the modified sensor data
However, EL-Shaer discloses the limitation (In paragraph [0076], EL-Shaer discloses that the signal processing system 502 may obtain the sensor data 506 from a different component other than the sensor 504. Further, one or more sensors 504 and/or a different component can perform preliminary signal processing to modify the sensor data 506 prior to the signal processing system 502 obtaining the sensor data 506).
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bramley in view of Stenson before them, to include EL-Shaer’s state estimation to improve the accuracy of the vehicle state variables outputted to one or more other systems in the environment (e.g., environment 100) as processed sensor data 510 as taught by EL-Shaer (see [0080]).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Bramley in view of Stenson and further in view of ARMITAGE et al. (US Pub No.: 20220254492 A1), hereinafter referred to as ARMITAGE.
Regarding claim 14, Bramley in view of Stenson discloses the elements of claim 10. Bramley in view of Stenson does not explicitly disclose:
The method of claim 10, wherein processing the sensor data and the process tool data further comprises: processing, using a first MLM of the one or more MLMs, the sensor data to obtain a first process result prediction
processing, using a second MLM of the one or more MLMs, the first process result prediction to obtain a second process result prediction
determining the prediction based on a combination of at least the first process result prediction and the second process result prediction
However, ARMITAGE discloses the limitations:
The method of claim 10, wherein processing the sensor data and the process tool data further comprises: processing, using a first MLM of the one or more MLMs, the sensor data to obtain a first process result prediction (In paragraph [0143], ARMITAGE discloses the training sensor dataset used for training the first set of ML model(s).)
processing, using a second MLM of the one or more MLMs, the first process result prediction to obtain a second process result prediction (In paragraph [0143], ARMITAGE discloses the second set of ML model(s), corresponding to estimates or calculations of the second sensor.)
determining the prediction based on a combination of at least the first process result prediction and the second process result prediction (In paragraph [0143], ARMITAGE discloses calculating/determining first and second process results.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bramley in view of Stenson before them, to include ARMITAGE’s automated detection to estimate the one or more clinical biomarker(s) of the subject based on the extracted and classified segments of sensor data as taught by ARMITAGE (see [0001]).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson et al. (Pub No.: 20230150529 A1), hereinafter referred to as Stenson, in view of KETYKO et al. (Pub No.: 20210026453 A1), hereinafter referred to as KETYKO and further in view of Bramley et al. (Pub No.: 20210223780 A1), hereinafter referred to as Bramley.
With respect to claim 15, Stenson discloses:
A method, comprising: training a machine learning model (MLM) comprising: receiving training data comprising (i) first sensor data indicating a first state of an environment of a first process chamber processing a first substrate and (ii) metrology data comprising process result measurements and location data indicating first locations across a surface of the first substrate corresponding to the process result measurements (In FIG. 3A and paragraph [0054], Stenson discloses collecting first sensor data and second sensor data. The input first sensor data captures different conditions (e.g., a first sensing condition), including scenarios such as different geographical locations, time of day (e.g., daytime vs. nighttime), and atmospheric conditions (e.g., different weather conditions affecting visibility and sensor performance). In paragraphs [0056-0057], Stenson discloses the discriminator model that processes the input data. After processing the input data (e.g., the first object from the input sensor data), the model outputs a confidence level indicating how sure it is about its classification, expressed as a percentage. A classification is deemed successful or unsuccessful. In paragraph [0062], Stenson discloses determining a difference between the first object and the modified/second object. Depending on the sensor data type, the difference may be in the form of a pixel value difference for images or a positional data difference and/or intensity value difference for point clouds.)
With respect to claim 15, Stenson does not specifically disclose:
encoding the training data to generate encoded training data
causing a regression to be performed using the encoded training data
However, KETYKO is known to disclose:
Encoding the training data to generate encoded training data (In paragraph [0081], KETYKO discloses that the machine learning model includes a data adaptation part configured for determining a modified sensor data item based on an input sensor data item, an encoder configured for determining encoded features based on the modified sensor data item, and a decoder configured for determining a decoded sensor data item based on the encoded features, representing an estimation of the input sensor data item.)
Stenson and KETYKO are analogous pieces of art because both references concern training based on a plurality of sensor data items. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Stenson, with training data from a collection of sensor data (e.g., images, LIDAR sensor data, RADAR sensor data, etc.) with labeled objects to facilitate training of ML model(s) for object detection and/or sensor data adaptation or augmentation as taught by Stenson, with a method and an apparatus for processing sensor data with a machine learning classifier as taught by KETYKO. The motivation for doing so would have been to increase efficiency in training and/or adapting simulation data and/or simulation models for new cities and/or new environmental conditions through the use of a closed-loop deep learning system (see [0017] of Stenson).
With respect to claim 15, Stenson in view of KETYKO does not specifically disclose:
A regression to be performed using the encoded training data
However, Bramley is known to disclose:
A regression to be performed using the encoded training data (In paragraph [0035], Bramley discloses the machine learning model to be trained for a downstream task (e.g., regression) using training sensor data that is different from the combined sensor data used during inference.)
Stenson, KETYKO, and Bramley are analogous pieces of art because the references concern training based on a plurality of sensor data items. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Stenson in view of KETYKO with determining that the predictions are accurate when, based at least in part on the comparing, the expected predictions are represented in the predictions of the neural network as taught by Bramley. The motivation for doing so would have been to increase efficiency in training and/or adapting simulation data and/or simulation models for new cities and/or new environmental conditions through the use of a closed-loop deep learning system (see [0017] of Stenson).
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of KETYKO, Bramley and further in view of Hester et al. (US Patent No. 11,921,824 B1), hereinafter referred to as Hester.
Regarding claim 16, Stenson in view of KETYKO and Bramley disclose the elements of claim 15. Stenson in view of KETYKO and Bramley do not explicitly disclose:
The method of claim 15, further comprising: receiving second sensor data indicating a second state of an environment of a second process chamber processing a second substrate
encoding the second sensor data to generate encoded sensor data
using the encoded sensor data as input to the trained MLM
receiving one or more outputs from the trained MLM, the one or more outputs comprising encoded prediction data
and decoding the encoded prediction data to generate prediction data comprising values indicating process results of the second substrate in second locations across a surface of the second substrate, the second locations corresponding to the first locations of the first substrate
However, Hester discloses the limitations:
The method of claim 15, further comprising: receiving second sensor data indicating a second state of an environment of a second process chamber processing a second substrate (In Col. 11, lines 39-47, Hester discloses receiving second sensor data from a second sensor for fusing sensor data of different modalities using a transformer encoder.)
encoding the second sensor data to generate encoded sensor data (In Col. 11, lines 39-47, Hester discloses that the transformer encoder may be used to generate a second modified plurality of tokens representing the point cloud lidar sensor data.)
using the encoded sensor data as input to the trained MLM (In Col. 2, lines 27-29, Hester discloses that transformer models are machine learning models that include an encoder network and a decoder network.)
receiving one or more outputs from the trained MLM, the one or more outputs comprising encoded prediction data (In Col. 2, lines 27-34, Hester discloses that transformer models are machine learning models that include an encoder network and a decoder network. The encoder takes an input and generates feature representations (e.g., feature vectors, feature maps, etc.) from the input. The feature representation is then fed into a decoder that may generate an output based on the encodings.)
and decoding the encoded prediction data to generate prediction data comprising values indicating process results of the second substrate in second locations across a surface of the second substrate, the second locations corresponding to the first locations of the first substrate (In Col. 11, lines 48-54, Hester discloses that processing may continue at action 418, at which a computer vision (CV) operation may be performed using the first modified plurality of tokens and the second modified plurality of tokens. The particular computer vision operation and/or other predictive task may be dependent on the tasks for which the particular transformer decoder and the prediction heads have been trained.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of KETYKO and Bramley before them, to include Hester's sensor data to improve the machine learning models over time by retraining the models as more and more data becomes available, as taught by Hester (see Col. 1, lines 65-67).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of KETYKO, Bramley, Hester and further in view of DUTTA et al. (US Pub No.: 20200034511 A1), hereinafter referred to as DUTTA.
Regarding claim 17, Stenson in view of KETYKO, Bramley and Hester disclose the elements of claim 16. Stenson in view of KETYKO, Bramley and Hester do not explicitly disclose:
The method of claim 16, wherein at least one of encoding the sensor data or decoding the encoded prediction data is performed using principal component analysis (PCA).
However, DUTTA discloses the limitation (In paragraph [0037], DUTTA discloses encoding process information 205, such as the process event data 206 and the data collected by sensors 207, into a feature space, i.e., performing feature extraction of the process information 205. The encoding can be through various machine learning methods such as Auto-Encoder (AE) and Principal Component Analysis (PCA).)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of KETYKO, Bramley and Hester before them, to include DUTTA's system for wafers, in which process event data such as the pressure inside a chamber (process equipment) may increase or decrease, as taught by DUTTA (see [0028]).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of KETYKO, Bramley, Hester, DUTTA and further in view of FUKUNAGA et al. (US Pub No.: 20220402087 A1), hereinafter referred to as FUKUNAGA.
Regarding claim 18, Stenson in view of KETYKO, Bramley, Hester and DUTTA disclose the elements of claim 16. Stenson in view of KETYKO, Bramley, Hester and DUTTA do not explicitly disclose:
The method of claim 16, wherein the prediction data indicates a first average thickness associated with a central region of the second substrate and a second average thickness associated with an edge region of the second substrate
However, FUKUNAGA discloses the limitation (In paragraph [0018], FUKUNAGA discloses processing the first substrate in a state in which a rear surface of the second substrate is held by a substrate holder, such that in a portion where the thickness of the second substrate S is large, the thickness of the first substrate W varies correspondingly.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of KETYKO, Bramley, Hester and DUTTA before them, to include FUKUNAGA's substrate processing apparatus to improve the flatness of a first substrate in a combined substrate in which the first substrate and a second substrate are bonded to each other, as taught by FUKUNAGA (see [0004]).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of KETYKO, Bramley and further in view of YOKOMIZO et al. (US Pub No.: 20220285234 A1), hereinafter referred to as YOKOMIZO.
Regarding claim 19, Stenson in view of KETYKO and Bramley disclose the elements of claim 15. Stenson in view of KETYKO and Bramley do not explicitly disclose:
The method of claim 15, wherein the process result measurements comprise a value indicating an etch bias of the first substrate
However, YOKOMIZO discloses the limitation (In paragraph [0056], YOKOMIZO discloses that the first substrate comprises a value corresponding to an etch bias.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of KETYKO and Bramley before them, to include YOKOMIZO's wafer bonding to optimize the design of the first alignment diagnostic structures 930 and the second alignment diagnostic structures, as taught by YOKOMIZO (see [0092]).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Stenson in view of KETYKO, Bramley and further in view of Fox et al. (US Patent No. 11,128,737 B1), hereinafter referred to as Fox.
Regarding claim 20, Stenson in view of KETYKO and Bramley disclose the elements of claim 15. Stenson in view of KETYKO and Bramley do not explicitly disclose:
The method of claim 15, wherein the regression is performed as a part of a gradient boosting regression (GBR)
However, Fox discloses the limitation (In Col. 8, lines 34-39, Fox discloses that the first data model server 108 may use a gradient-boosting regression model to generate the first artificial intelligence data model.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Stenson in view of KETYKO and Bramley before them, to include Fox's data modeling to optimize data quality associated with the machine learning model by tuning different parameters, as taught by Fox (see Col. 8, lines 45-49).
Response to Arguments
Applicant's arguments filed on 09/16/2025 have been fully considered and are persuasive in part.
Pertaining to Rejection under 101
The rejections of claims 1-20 under 35 U.S.C. § 101 are withdrawn.
Pertaining to Rejection under 103
Applicant’s arguments with regard to the examiner’s rejections under 35 U.S.C. § 103 are moot in view of the new grounds of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVEL HONORE whose telephone number is (703)756-1179. The examiner can normally be reached Monday-Friday 8 a.m. -5:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela D. Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
EVEL HONORE
Examiner
Art Unit 2142
/Mariela Reyes/Supervisory Patent Examiner, Art Unit 2142