DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 05/14/2024 and 10/24/2025 have been considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6, 8, 10-13, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Xiang (US 2025/0336053) in view of Bastani (US 2023/0316103).
As per claims 1 and 15, Xiang teaches a method and a non-transitory machine-readable storage medium comprising: identifying a plurality of substrate images that have been sorted into a plurality of classes (Xiang, ¶[0038] “After the classification of substrate images based on extracted features, in block 206 of the method 200 clusters of the defect images are classified as misclassified substrate.” There are multiple classes here); training a machine learning model using data input comprising the plurality of substrate images and target output comprising the plurality of classes (Xiang, ¶[0032] “The machine learning model in block 60 is trained on the processed data to learn patterns and features associated with different types of substrate defects. In block 62, the inspection system uses supervised learning algorithms to train one or more machine learning models to recognize different types of defects based on their features.” The different defects and good substrates represent different classes); and refining the trained machine learning model using a triplet loss function (Xiang, ¶[0042] “From a given set of training data points with known class 402, the distance between the unknown data point and each of the training data points is calculated using a distance metric, such as Euclidean distance or Manhattan distance 404.” The distance metric is interpreted as the claimed triplet loss function, as applicant has not specified what type of triplet loss function is used and a triplet loss function can be based on a distance metric, as evidenced by the secondary reference cited below for clarity) based on one or more substrate images misclassified by the trained machine learning model to provide a refined trained machine learning model associated with performance of an action associated with substrate processing (Xiang, ¶[0033] “This can happen for example because a first machine learning model does not classify certain defects well, but a second specified machine learning model of a different type does correctly classify the misclassified machine learning model.” This represents the misclassification, and the second machine learning model represents the refined trained machine learning model).
Xiang doesn’t teach specifically using a triplet loss function.
However, Bastani teaches using a triplet loss function (Bastani, ¶[0099] “In a triplet embedding (for triplet constraints), the model may learn a distance metric on a low dimensional space and it is guaranteed that the distance on the points in the low dimensional space is close reflects the given triplet constraint with high probability. [0100] A further approach for triplet constraints comprises using neural networks to optimize a loss function called “triplet-loss” that summarizes the constraints in a mathematical formula.” This represents using a triplet loss function and tying it to a distance metric, which is what Xiang teaches).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Xiang with Bastani’s loss function called “triplet-loss” to replace the distance metric of Xiang.
The motivation would have been to optimize a loss function, as taught by Bastani (¶[0100]).
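For illustration only (not part of the record), the relationship relied on above, that a triplet loss function is computed on top of a distance metric such as the Euclidean distance of Xiang ¶[0042], can be sketched as follows. The function names and the margin value are illustrative assumptions, not drawn from either reference:

```python
import math

def euclidean(a, b):
    # Distance metric of the kind described in Xiang ¶[0042]
    # (Euclidean distance between two feature vectors).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Loss of the kind described in Bastani ¶[0100] ("triplet-loss"):
    # it penalizes cases where the anchor point is not closer to a
    # same-class (positive) point than to a different-class (negative)
    # point by at least the margin. Note that the loss is defined
    # entirely in terms of the underlying distance metric.
    return max(0.0, euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)

# A satisfied triplet (positive much closer than negative) yields zero
# loss; a violated triplet yields a positive loss.
print(triplet_loss([0.0, 0.0], [1.0, 0.0], [3.0, 0.0]))  # 0.0
print(triplet_loss([0.0, 0.0], [2.0, 0.0], [1.0, 0.0]))  # 2.0
```

This sketch is offered only to show that a triplet loss is a mathematical construction over a distance metric, consistent with the mapping between the two references.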
As per claims 3, 10 and 17, Xiang in view of Bastani teaches the method of claim 1, wherein the performance of the action comprises providing current substrate images to the refined trained machine learning model to select an algorithm for generation of metrology data (Bastani, ¶[0063] “An inspection apparatus, which may also be referred to as a metrology apparatus, is used to determine properties of the substrates W, and in particular, how properties of different substrates W vary or how properties associated with different layers of the same substrate W vary from layer to layer” This represents providing current substrate images to the refined trained machine learning model to select an algorithm for generation of metrology data, as images are provided to the inspection apparatus).
As per claims 4, 11 and 18, Xiang in view of Bastani teaches the method of claim 1, further comprising: training a base model based on a plurality of historical substrate images sorted into a plurality of historical classes (Xiang, ¶[0056] “The machine 600 (e.g., computer system) may include a hardware-based processor 601 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 603 and a static memory 605, some or all of which may communicate with each other via an interlink 630 (e.g., a bus).” This represents images sorted into a plurality of historical classes); and sorting, based on image encodings of the trained base model, the historical substrate images into a plurality of clusters, wherein the plurality of substrate images comprise clustered substrate images from each of the plurality of clusters (Xiang, ¶[0057] “The instructions 624 may also reside, completely or at least partially, within a main memory 603, within a static memory 605, within a mass storage device 607, or within the hardware-based processor 601 during execution thereof by the machine 600. In an example, one or any combination of the hardware-based processor 601, the main memory 603, the static memory 605, or the storage device 620 may constitute machine readable media.” And ¶[0059] “The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 600 and that cause the machine 600 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions.” This represents encoding).
As per claims 5, 12 and 19, Xiang in view of Bastani teaches the method of claim 1, wherein the training of the machine learning model comprises using a negative log-likelihood loss function (Bastani, ¶[0100] “optimize a loss function called “triplet-loss” that summarizes the constraints in a mathematical formula.” This represents a negative log-likelihood loss function, as at times this function might go negative and contain a log-likelihood loss function).
As per claims 6, 13 and 20, Xiang in view of Bastani teaches the method of claim 1, wherein the training of the machine learning model comprises using few-shot learning by using up to a threshold amount of substrate images in each class of the plurality of classes (Xiang, ¶[0028] “The image processing is done for defect feature extraction in step 14 which involves image processing techniques like thresholding” This represents a threshold).
As per claim 8, Xiang teaches a method comprising: identifying current substrate images associated with substrate processing (Xiang, ¶[0038] “After the classification of substrate images based on extracted features, in block 206 of the method 200 clusters of the defect images are classified as misclassified substrate.” This represents substrate processing); providing the current substrate images as input to a refined trained machine learning model, the refined trained machine learning model having been trained based on a plurality of substrate images that have been sorted into a plurality of classes (Xiang, ¶[0033] “This can happen for example because a first machine learning model does not classify certain defects well, but a second specified machine learning model of a different type does correctly classify the misclassified machine learning model.” This represents the misclassification, and the second machine learning model represents the refined trained machine learning model) and having been refined using a triplet loss function based on one or more substrate images misclassified by the trained machine learning model (Xiang, ¶[0042] “From a given set of training data points with known class 402, the distance between the unknown data point and each of the training data points is calculated using a distance metric, such as Euclidean distance or Manhattan distance 404.” The distance metric is interpreted as the claimed triplet loss function, as applicant has not specified what type of triplet loss function is used and a triplet loss function can be based on a distance metric, as evidenced by the secondary reference cited below for clarity); obtaining, from the refined trained machine learning model, output associated with predictive data; and causing, based on the predictive data, performance of an action associated with the substrate processing (Xiang, ¶[0042] “Alternatively, a weighted vote is used, where the distances of the K neighbors are used as weights to determine the class. Output the predicted class of the unknown data point.” The predictive data is represented by the predicted class).
Xiang doesn’t teach specifically using a triplet loss function.
However, Bastani teaches using a triplet loss function (Bastani, ¶[0099] “In a triplet embedding (for triplet constraints), the model may learn a distance metric on a low dimensional space and it is guaranteed that the distance on the points in the low dimensional space is close reflects the given triplet constraint with high probability. [0100] A further approach for triplet constraints comprises using neural networks to optimize a loss function called “triplet-loss” that summarizes the constraints in a mathematical formula.” This represents using a triplet loss function and tying it to a distance metric, which is what Xiang teaches).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Xiang with Bastani’s loss function called “triplet-loss” to replace the distance metric of Xiang.
The motivation would have been to optimize a loss function, as taught by Bastani (¶[0100]).
Allowable Subject Matter
Claims 2, 7, 9, 14 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The subject matter of these claims was not found in the prior art.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANTIAGO GARCIA whose telephone number is (571)270-5182. The examiner can normally be reached Monday-Friday 9:30am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANTIAGO GARCIA/Primary Examiner, Art Unit 2673
/SG/