Prosecution Insights
Last updated: April 19, 2026
Application No. 17/376,664

INTERACTIVE AND ITERATIVE TRAINING OF A CLASSIFICATION ALGORITHM FOR CLASSIFYING ANOMALIES IN IMAGING DATASETS

Non-Final OA (§103, §112)
Filed: Jul 15, 2021
Examiner: WU, NICHOLAS S
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Carl Zeiss SMT GmbH
OA Round: 3 (Non-Final)
Grant Probability: 47% (Moderate)
OA Rounds: 3-4
To Grant: 3y 9m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 47% (18 granted / 38 resolved; -7.6% vs TC avg)
Interview Lift: +43.1% for resolved cases with interview (strong lift)
Avg Prosecution: 3y 9m typical timeline; 44 currently pending
Total Applications: 82 across all art units (career history)

Statute-Specific Performance

§101: 26.7% (-13.3% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§112: 17.4% (-22.6% vs TC avg)
TC averages are estimates. Based on career data from 38 resolved cases.

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07/31/2025 has been entered.

Response to Arguments

Applicant's arguments filed 07/31/2025 have been fully considered but they are not fully persuasive.

Regarding the 101 rejections, applicant's arguments and amendments to the independent claims are persuasive and overcome the previous 101 rejections. Specifically, applicant's amended limitations "determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies; and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die" provide a technical improvement because the anomalies are used to adjust an operation of a production line to prevent downstream failures. See pg. 13 of "Remarks": "In the present application, the claims recite additional elements that apply or use the alleged judicial exception in a meaningful way. For example, the independent claims 1, 27, and 28 recite a specific improvement over prior art systems at least by 'adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die.' As described in the preceding paragraphs, the specific improvement of the additional elements also allow for machine operations for enhancing wafer manufacturing operations." Applicant's amendments and corresponding arguments that the claimed invention provides a technical improvement to the field of wafer manufacturing are persuasive. Therefore, the 101 rejections are withdrawn.

Regarding the 103 rejections, applicant's arguments filed with respect to the prior art rejections have been fully considered but they are moot. Applicant has amended the claims to recite new combinations of limitations, and applicant's arguments are directed at the amendment. Please see below for new grounds of rejection, necessitated by amendment.

Claim Objections

Claims 6, 8, and 10 are objected to because of the following informalities:

Claim 6 recites "The computer-implemented method of any one of claim 1". The claim should read "The computer-implemented method of claim 1" for consistency with the rest of the claims. Appropriate correction is required.

Claim 8 recites "wherein the the plurality of anomalies". The repeated "the" should be deleted. Appropriate correction is required.

Claim 10 recites "The computer-implemented method of any one of claim 1". The claim should read "The computer-implemented method of claim 1" for consistency with the rest of the claims. Appropriate correction is required.

Claim Rejections - 35 USC § 112: New Matter

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-29 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 1, the claim recites the limitations "and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die." These limitations are considered new matter because the original disclosure does not appear to provide support for adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die. The closest support from the original disclosure appears to be directed to detecting the defects using a counter (see Specification, pg. 6-7, "One example use case is the Process Window Qualification: here, dies on a wafer are produced with varying production parameters, e.g., exposure time, focus variation, etc. Optimized production parameters can be identified based on a distribution of the defects across different regions of the wafer, e.g., across different dies of the wafer. This is only one example use case. Other use cases include, e.g., end of line testing." and Specification, pg. 17-18, "During the production phase, the trained ML classification algorithm can be used for inference. The manual user interaction during the training phase should be limited. The manual user interaction during the production phase can be further reduced if compared to the training phase. For instance, during the production phase, inference using the trained ML classification algorithm can be used to determine, e.g., a defect count per die and per class. Process monitoring can be implemented, e.g., tracking such defect count."). However, there is no mention of adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die. The courts have determined that the introduction of claim changes which involve narrowing the claims by introducing elements or limitations which are not supported by the as-filed disclosure is a violation of the written description requirement of 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph (MPEP 2163.05, Section II). The noted limitations are therefore considered new matter.

Regarding claims 2-26, the claims are rejected for at least their dependence on claim 1. Regarding claims 27-29, the claims are similar to claim 1 and rejected under the same rationale.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 5, the claim recites the limitation "selecting the plurality of anomalies to have a low similarity measure with respect to the one or more further anomalies." There is insufficient antecedent basis for this limitation in the claim because the term "the one or more further anomalies" lacks antecedent basis. For the purposes of examination, "the one or more further anomalies" is interpreted as additional anomalies.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8-10, 16-18, 21-23, and 25-29 are rejected under 35 U.S.C. 103 as being unpatentable over Auerbach, US Pre-Grant Publication 2006/0115143A1 ("Auerbach") in view of Lu et al., US Pre-Grant Publication 2019/0287230A1 ("Lu") and further in view of Tanaka et al., US Pre-Grant Publication 2003/0138978A1 ("Tanaka").
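As context for the disputed limitation, the "defect count per die and per class" monitoring quoted from the specification above, followed by exclusion of wafers attributed to a flagged die, can be sketched as follows. This is a minimal illustration only; the function names, data layout, and threshold are hypothetical and are not drawn from the application or the cited references:

```python
from collections import Counter

def flag_faulty_dies(classified_defects, threshold=5):
    """Count classified defects per die and flag dies whose total
    defect count exceeds a threshold (hypothetical criterion).

    classified_defects: iterable of (die_id, defect_class) pairs,
    i.e. the output of a trained classification algorithm.
    """
    per_die = Counter(die for die, _cls in classified_defects)
    return {die for die, count in per_die.items() if count > threshold}

def exclude_wafers(wafers, faulty_dies):
    """Drop wafers identified as produced by a flagged die."""
    return [w for w in wafers if w["die"] not in faulty_dies]

# Seven classified defects attributed to die_3, two to die_1.
defects = [("die_3", "scratch")] * 7 + [("die_1", "particle")] * 2
faulty = flag_faulty_dies(defects, threshold=5)
wafers = [{"id": 1, "die": "die_3"}, {"id": 2, "die": "die_1"}]
kept = exclude_wafers(wafers, faulty)  # only wafer 2 remains
```

Whether such per-die counting supports the claimed "adjusting … an operation of the production line" is, of course, exactly the written-description dispute above.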
Regarding claim 1, Auerbach discloses:

A computer-implemented method, (Auerbach, ¶24, “Apparatus 10 further comprises processing system 60 and a computing system 66 [A computer-implemented method,] including a monitor 64.”).

comprising: receiving, at a processing device, an imaging dataset of wafers, each image of the imaging dataset comprising a plurality of tiles, the wafers comprising a plurality of semiconductor structures; (Auerbach, ¶11, “a wafer inspection system generates a set of defects of the wafer, and the system performs an initial filtration of the defects to generate candidate defects for further analysis. The inspection system generates values of image attributes for each of the candidate defects [each image of the imaging dataset comprising a plurality of tiles, the wafers comprising a plurality of semiconductor structures;], the image attributes typically comprising expressions that are functions of measurements of the candidate defects made by the inspection system.” and Auerbach, ¶26, “In the production phase, surfaces of other wafers are inspected, these wafers being in substantially the same phase of fabrication as already-inspected wafer 26 [comprising: receiving, at a processing device, an imaging dataset of wafers,].”).

determining, by the processing device, using an anomaly detection algorithm a plurality of anomalies in the imaging dataset,… (Auerbach, ¶11, “a wafer inspection system generates a set of defects of the wafer, and the system performs an initial filtration of the defects to generate candidate defects for further analysis [determining, by the processing device, using an anomaly detection algorithm a plurality of anomalies in the imaging dataset,…].”).

training, by the processing device, a machine-learning classification algorithm to generate a classification of the plurality of anomalies in the imaging dataset, (Auerbach, ¶26, “Apparatus 10 is typically operated in an initial “learning” phase, and subsequently in a production phase. In the learning phase, surface 25 of wafer 26 is inspected, and a human operator 90 of apparatus 10 interacts with the apparatus, using results generated by the apparatus, to iteratively decide values of the results that have a high probability of being generated by yield limiting defects. [training, by the processing device, a machine-learning classification algorithm to generate a classification of the plurality of anomalies in the imaging dataset,]”).

wherein the classification of the plurality of anomalies comprises assigning each of the plurality of anomalies to a corresponding class of a current set of classes into which the anomalies of the plurality of anomalies are binned;… (Auerbach, ¶28 and Figure 3A, “In a second step 204, typically performed in post-processor 62 after surface 25 has been completely scanned, the alarms are grouped into clusters, and the clusters are classified as defects. Typically one cluster represents one defect. Post-processor 62 performs further filtering of each of the defects according to the properties of each defect, where the properties comprise the number of alarms, also herein termed the volume, in the cluster, and its grade, which is a function of individual alarm grades of each alarm.”; clustering and filtering the defects into groups is interpreted as binning anomalies to classes, and Figures 3A/3B show bins of different anomaly classes (i.e. wherein the classification of the plurality of anomalies comprises assigning each of the plurality of anomalies to a corresponding class of a current set of classes into which the anomalies of the plurality of anomalies are binned;…)).
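The "plurality of tiles" limitation mapped above (each tile covering a plurality of pixels in each direction around a potential anomaly) can be illustrated with a short tiling sketch; the tile size and names are illustrative assumptions, not the claimed parameters:

```python
import numpy as np

def tile_image(image, tile=8):
    """Split a 2-D image into non-overlapping tile x tile patches,
    so each patch spans `tile` pixels in each direction.

    Returns an array of shape (n_tiles, tile, tile); edge pixels that
    do not fill a whole tile are dropped in this simplified sketch.
    """
    h, w = image.shape
    rows, cols = h // tile, w // tile
    trimmed = image[: rows * tile, : cols * tile]
    patches = trimmed.reshape(rows, tile, cols, tile).swapaxes(1, 2)
    return patches.reshape(-1, tile, tile)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
tiles = tile_image(img, tile=8)
# A 64x64 image yields an 8x8 grid of 8x8 tiles, i.e. 64 tiles;
# tiles[0] is the top-left 8x8 block of the image.
```

In practice the tile size would be chosen so that a single tile captures the anomaly plus enough surrounding structure (its spatial context) for classification.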
While Auerbach teaches detecting anomalies on a wafer using a classification algorithm, Auerbach does not explicitly teach:

…the anomaly detection algorithm comparing each tile of each image of the imaging dataset to a reconstructed representation of a respective input tile, wherein each tile of each image of the imaging dataset comprises a plurality of pixels in each direction covering a spatial context of a potential anomaly;…

…determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies; and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die.

Lu teaches:

…the anomaly detection algorithm comparing each tile of each image of the imaging dataset to a reconstructed representation of a respective input tile, (Lu, ¶45, “The model is applied at 103 using a processor to find one or more anomalies in image patches. The model can generate reconstruction errors and/or probabilities. The model can predict whether a patch is abnormal by examining the patch level reconstruction error and/or probabilities […the anomaly detection algorithm comparing each tile of each image of the imaging dataset to a reconstructed representation of a respective input tile,].”).

wherein each tile of each image of the imaging dataset comprises a plurality of pixels in each direction covering a spatial context of a potential anomaly;… (Lu, ¶19, “The one or more anomalies can each be one of an anomaly patch or an anomaly region;” a patch or region is interpreted as a tile, as a region/patch of an image comprises multiple pixels in each direction with a spatial context (i.e. wherein each tile of each image of the imaging dataset comprises a plurality of pixels in each direction covering a spatial context of a potential anomaly;…)).

Auerbach and Lu are both in the same field of endeavor (i.e. anomaly detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach and Lu to teach the above limitation(s). The motivation for doing so is that autoencoders improve the robustness and accuracy of anomaly detection (cf. Lu, ¶55, “An advantage of a variational autoencoder is its latent variables are stochastic variables. Sometimes the nominal and defect data can share the same mean, but their deviations can be different. The variational autoencoder takes into account the distribution difference between an original input and reconstructed data, which improves accuracy and robustness.”).

While Auerbach in view of Lu teaches anomaly detection using a machine learning model, the combination does not explicitly teach:

…determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies; and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die.

Tanaka teaches:

…determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies; (Tanaka, ¶63, “For each wafer (which may be each sampled wafer) flowing on the fabrication line, the inspection results collection and analysis unit 30 receives defects distribution or defects variation data 31 per defect type inspected by the inspection equipment or review station;” a defect wafer is interpreted as having a defect die (i.e. …determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies;)).

and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die.
(Tanaka, ¶65, “The fault monitoring unit 36 conveys the fault information received from the inspection results collection and analysis unit 30 to a fabrication line administrator 37 via display and annunciator means 38. On the other hand, the fabrication line management station 34 exerts control such as stopping the on-going fabrication process by the faulty manufacturing equipment and switches the operation to another manufacturing equipment that operates normally, based on the fault information received from the inspection results collection and analysis unit 30 [and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die.].”).

Auerbach, Lu, and Tanaka are all in the same field of endeavor (i.e. anomaly detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach, Lu, and Tanaka to teach the above limitation(s). The motivation for doing so is that adjusting the production line when an anomaly is detected prevents further downstream damage that the anomaly could cause (cf. Tanaka, ¶3-5).

Regarding claim 8, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Auerbach further teaches wherein the the plurality of anomalies are being binned into a predefined class of the set of classes. (Auerbach, ¶26, “Apparatus 10 is typically operated in an initial “learning” phase, and subsequently in a production phase. In the learning phase, surface 25 of wafer 26 is inspected, and a human operator 90 of apparatus 10 interacts with the apparatus, using results generated by the apparatus, to iteratively decide values of the results that have a high probability of being generated by yield limiting defects.”; the mention of a learning phase is interpreted as having a predefined set of classes (i.e. wherein the the plurality of anomalies are being binned into a predefined class of the set of classes.)).

Regarding claim 9, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Auerbach further teaches further comprising: determining a population of a class of the current set of classes into which the plurality of anomalies are binned. (Auerbach, ¶33 and Figures 3A/3B, “The inventor has found that, in attribute space, there are one or more sections of any attribute histogram in which there is a preponderance of the nuisances. For histogram 250, for example, the one or more sections may comprise all or part of a central region 253, and/or all or part of a region outside the central region. For histogram 240, the one or more sections may comprise regions around values 242 and 244. Outside these one or more sections, there is a preponderance of true defects.”; using the histograms and comparing the peaks, or the populations, of attributes to determine true defects is interpreted as determining a population of the current class (i.e. further comprising: determining a population of a class of the current set of classes into which the plurality of anomalies are binned.)).

Regarding claim 10, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of any one of claim 1. Lu further teaches wherein the spatial context of the potential anomaly comprises a position of a semiconductor structure type. (Lu, ¶45, “The model is applied at 103 using a processor to find one or more anomalies in image patches;” image patches are interpreted as tiles and a tile is interpreted as a position on an image (i.e. wherein the spatial context of the potential anomaly comprises a position). “The model can generate reconstruction errors and/or probabilities. The model can predict whether a patch is abnormal by examining the patch level reconstruction error and/or probabilities.” and Lu, ¶42, “In an instance, the training set includes images of semiconductor structures, dies, or parts of a semiconductor wafer surface [of a semiconductor structure type.].”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lu with the teachings of Auerbach and Tanaka for the same reasons disclosed in claim 1.

Regarding claim 16, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Auerbach further teaches further comprising applying at least one abort criterion, wherein the abort criterion is selected from a group consisting of a user input, a number of classes for which anomalies have been presented to the user, a population of classes in the current set of classes, a probability of finding a new class not yet included in the set of classes, a worst classification confidence of all un-annotated anomalies, and an aggregated count of anomalies selected for presentation to the user or annotated by the user reaching a threshold. (Auerbach, ¶87-91, “In a decision step 322, post-processor checks to see that the number of false alarms outside the bounding box is sufficiently small, using expression (10):

N_FA(T) < max(N_FA(max), FAR(recipe) · (N_T(T) + N_FA(T)))   (10)

where N_FA(max) is a number, ≥ 0, which ensures that flowchart 300 gives satisfactory results when small numbers of defects are reviewed [further comprising applying at least one abort criterion, wherein the abort criterion is selected from a group consisting of… and an aggregated count of anomalies selected for presentation to the user or annotated by the user reaching a threshold]. A typical value of N_FA(max) is 4. FAR(recipe) is the prescribed false alarm rate that is to be produced by flowchart 300.
If expression (10) is not satisfied, steps 316, 318, and 320 of the multidimensional stage are reiterated.”).

Regarding claim 17, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Auerbach further teaches wherein the plurality of anomalies are concurrently displayed on a user interface configured to batch annotate the plurality of anomalies. (Auerbach, ¶61, “In a fifth step 310, the defects selected for review in step 308 are displayed on monitor 64 for review by operator 90 [wherein the plurality of anomalies are concurrently displayed on a user interface]. The operator classifies the displayed defects as true, nuisance, or unknown [configured to batch annotate the plurality of anomalies.], and post-processor 62 uses the classification in subsequent steps of flowchart 300.”).

Regarding claim 18, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 17. Auerbach further teaches wherein batch annotation of the plurality of anomalies comprises batch assigning of a plurality of labels to the plurality of anomalies. (Auerbach, ¶61, “In a fifth step 310, the defects selected for review in step 308 are displayed on monitor 64 for review by operator 90. The operator classifies the displayed defects as true, nuisance, or unknown, and post-processor 62 uses the classification in subsequent steps of flowchart 300.”; classifying a plurality of defects in one step is interpreted as batch annotating multiple anomalies concurrently (i.e. wherein batch annotation of the plurality of anomalies comprises batch assigning of a plurality of labels to the plurality of anomalies.)).

Regarding claim 21, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Auerbach further teaches further comprising detecting at least some of plurality of anomalies using a die-to-die and/or die-to-database registration. (Auerbach, ¶25, “Apparatus 10 inspects surface 25 in order to locate defects on or close to the surface. By way of example, the inspection process is assumed to comprise a die-to-die [further comprising detecting at least some of plurality of anomalies using a die-to-die and/or die-to-database registration.] comparison, although other inspection processes known in the art, such as a wafer-to-wafer comparison and/or comparison with results from a database, may also be used.”).

Regarding claim 22, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Lu further teaches wherein the tiles of the imaging data comprise the anomalies and a surrounding of the anomalies. (Lu, ¶71, “input and reconstructed SEM patches with an autoencoder. Defective regions are not reconstructed because the training set only contains background patches. Thus, anomaly patches will have higher reconstruction errors;” background patches are interpreted as the surrounding of the anomalies as these regions are not anomalous (i.e. wherein the tiles of the imaging data comprise the anomalies and a surrounding of the anomalies.)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lu with the teachings of Auerbach and Tanaka for the same reasons disclosed in claim 1.

Regarding claim 23, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Auerbach further teaches wherein the current set of classes comprises at least one defect class and at least one nuisance class. (Auerbach, ¶61, “In a fifth step 310, the defects selected for review in step 308 are displayed on monitor 64 for review by operator 90.
The operator classifies the displayed defects as true, nuisance, or unknown [wherein the current set of classes comprises at least one defect class and at least one nuisance class.], and post-processor 62 uses the classification in subsequent steps of flowchart 300.”).

Regarding claim 25, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Lu further teaches wherein the imaging dataset is a multibeam scanning electron microscopic (SEM) image acquired by a multibeam scanning electron microscope configured to record the imaging dataset in a plurality of fields of view. (Lu, ¶82, “it is to be understood that the electron beam may be directed to and scattered from the wafer 204 at any suitable angles. In addition, the electron beam-based output acquisition subsystem may be configured to use multiple modes to generate images [wherein the imaging dataset is a multibeam scanning electron microscopic (SEM) image acquired by a multibeam scanning electron microscope] of the wafer 204 (e.g., with different illumination angles, collection angles, etc.) [configured to record the imaging dataset in a plurality of fields of view.].”). Auerbach, Lu, and Tanaka are all in the same field of endeavor (i.e. anomaly detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach, Lu, and Tanaka to teach the above limitation(s). The motivation for doing so is that using SEM images improves the quality of the imaging dataset.

Regarding claim 26, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Lu further teaches the imaging dataset is acquired by an imaging device comprising a Helium ion microscope or a cross-beam device. (Lu, ¶85, “In addition, the output acquisition subsystem may be any other suitable ion beam-based output acquisition subsystem such as those included in commercially available focused ion beam (FIB) systems, helium ion microscopy (HIM) systems [the imaging dataset is acquired by an imaging device comprising a Helium ion microscope or a cross-beam device.], and secondary ion mass spectroscopy (SIMS) systems.”). Auerbach, Lu, and Tanaka are all in the same field of endeavor (i.e. anomaly detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach, Lu, and Tanaka to teach the above limitation(s). The motivation for doing so is that using a Helium ion microscope for generating the imaging dataset improves the quality of the imaging dataset.

Regarding claim 27, the claim is similar to claim 1. Auerbach teaches the additional limitations One or more machine-readable hardware storage devices comprising instructions that are executable by one or more processing devices to perform operations (Auerbach, ¶24, “Apparatus 10 further comprises processing system 60 and a computing system 66 [One or more machine-readable hardware storage devices comprising instructions that are executable by one or more processing devices to perform operations] including a monitor 64.”).

Regarding claim 28, the claim is similar to claim 1.
Auerbach teaches the additional limitations A system comprising:… and a processing device comprising: one or more processing devices; and one or more machine-readable hardware storage devices comprising instructions that are executable by the one or more processing devices to perform operations (Auerbach, ⁋24, “Apparatus 10 further comprises processing system 60 and a computing system 66 [A system comprising:… and a processing device comprising: one or more processing devices; and one or more machine-readable hardware storage devices comprising instructions that are executable by the one or more processing devices to perform operations] including a monitor 64.”). Regarding claim 29, Auerbach discloses: A computer-implemented method, (Auerbach, ⁋24, “Apparatus 10 further comprises processing system 60 and a computing system 66 [A computer-implemented method,] including a monitor 64.”). comprising: receiving, at a processing device, an imaging dataset of wafers, each image of the imaging dataset comprising a plurality of tiles, the wafers comprising a plurality of semiconductor structures; (Auerbach, ⁋11, “a wafer inspection system generates a set of defects of the wafer, and the system performs an initial filtration of the defects to generate candidate defects for further analysis. The inspection system generates values of image attributes for each of the candidate defects [each image of the imaging dataset comprising a plurality of tiles, the wafers comprising a plurality of semiconductor structures;], the image attributes typically comprising expressions that are functions of measurements of the candidate defects made by the inspection system.” and Auerbach, ⁋26, “In the production phase, surfaces of other wafers are inspected, these wafers being in substantially the same phase of fabrication as already-inspected wafer 26 [comprising: receiving, at a processing device, an imaging dataset of wafers,].”). 
determining, by the processing device, using an anomaly detection algorithm a plurality of anomalies in the imaging dataset… (Auerbach, ⁋11, “a wafer inspection system generates a set of defects of the wafer, and the system performs an initial filtration of the defects to generate candidate defects for further analysis [determining, by the processing device, using an anomaly detection algorithm a plurality of anomalies in the imaging dataset,…].”). wherein the classification of the plurality of anomalies comprises assigning each of the plurality of anomalies to a corresponding class of a current set of classes into which the anomalies of the plurality of anomalies are binned;… (Auerbach, ⁋28 and Figure 3A, “In a second step 204, typically performed in post-processor 62 after surface 25 has been completely scanned, the alarms are grouped into clusters, and the clusters are classified as defects. Typically one cluster represents one defect. Post-processor 62 performs further filtering of each of the defects according to the properties of each defect, where the properties comprise the number of alarms, also herein termed the volume, in the cluster, and its grade, which is a function of individual alarm grades of each alarm.; clustering and filtering the defects into groups is interpreted as binning anomalies to classes and Figures 3A/3B show bins of different anomaly classes (i.e. wherein the classification of the plurality of anomalies comprises assigning each of the plurality of anomalies to a corresponding class of a current set of classes into which the anomalies of the plurality of anomalies are binned;…)”). 
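The binning the examiner reads onto Auerbach's ⁋28 (grouping alarms into clusters, treating each cluster as one defect, then filtering by the cluster's volume and grade) can be sketched as follows. All function and parameter names (`cluster_alarms`, `filter_defects`, `radius`, `min_volume`) are hypothetical illustrations, not terms from Auerbach.

```python
from collections import deque

def cluster_alarms(alarms, radius=2):
    """Group nearby alarm coordinates into clusters; one cluster ~ one defect."""
    unvisited = set(alarms)
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, queue = [seed], deque([seed])
        while queue:  # breadth-first growth of the cluster
            x, y = queue.popleft()
            near = {p for p in unvisited
                    if abs(p[0] - x) <= radius and abs(p[1] - y) <= radius}
            unvisited -= near
            cluster.extend(near)
            queue.extend(near)
        clusters.append(cluster)
    return clusters

def filter_defects(clusters, grades, min_volume=2):
    """Keep clusters whose volume (alarm count) meets a threshold; a
    cluster's grade is a function (here: max) of its alarm grades."""
    return [{"alarms": c, "volume": len(c),
             "grade": max(grades[p] for p in c)}
            for c in clusters if len(c) >= min_volume]
```

Under this reading, each surviving cluster is a "bin" of alarms, which is the interpretation the rejection applies to the claimed class binning.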
While Auerbach teaches detecting anomalies on a wafer using a classification algorithm, Auerbach does not explicitly teach: …and extracting, for each anomaly of the plurality of anomalies, a respective tile of the respective image of the imaging dataset; training, by the processing device, a machine-learning classification algorithm to generate a classification of the plurality of anomalies based on the tiles,… …determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies; and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die. Lu teaches …and extracting, for each anomaly of the plurality of anomalies, a respective tile of the respective image of the imaging dataset; training, by the processing device, a machine-learning classification algorithm to generate a classification of the plurality of anomalies based on the tiles,… (Lu, ⁋45, “The model is applied at 103 using a processor to find one or more anomalies in image patches; image patches are interpreted as tiles (i.e. …and extracting, for each anomaly of the plurality of anomalies, a respective tile of the respective image of the imaging dataset;). The model can generate reconstruction errors and/or probabilities. The model can predict whether a patch is abnormal by examining the patch level reconstruction error and/or probabilities [training, by the processing device, a machine-learning classification algorithm to generate a classification of the plurality of anomalies based on the tiles,…].”). Auerbach and Lu are both in the same field of endeavor (i.e. anomaly detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach and Lu to teach the above limitation(s). 
The motivation for doing so is that autoencoders improve the robustness and accuracy of anomaly detection (cf. Lu, ⁋55, “An advantage of a variational autoencoder is its latent variables are stochastic variables. Sometimes the nominal and defect data can share the same mean, but their deviations can be different. The variational autoencoder takes into account the distribution difference between an original input and reconstructed data, which improves accuracy and robustness.”). While Auerbach in view of Lu teaches anomaly detection using a machine learning model, the combination does not explicitly teach: …determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies; and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die. Tanaka teaches: …determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies; (Tanaka, ⁋63, “For each wafer (which may be each sampled wafer) flowing on the fabrication line, the inspection results collection and analysis unit 30 receives defects distribution or defects variation data 31 per defect type inspected by the inspection equipment or review station; a defect wafer is interpreted as having a defect die (i.e. …determining, by the processing device, a faulty die in a production line based on the classification of the plurality of anomalies;)”). and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die. (Tanaka, ⁋65, “The fault monitoring unit 36 conveys the fault information received from the inspection results collection and analysis unit 30 to a fabrication line administrator 37 via display and annunciator means 38. 
On the other hand, the fabrication line management station 34 exerts control such as stopping the on-going fabrication process by the faulty manufacturing equipment and switches the operation to another manufacturing equipment that operates normally, based on the fault information received from the inspection results collection and analysis unit 30 [and adjusting, by the processing device, an operation of the production line, to exclude from the production line a portion of the wafers identified as being produced by the faulty die.].”). Auerbach, in view of Lu, and Tanaka are both in the same field of endeavor (i.e. anomaly detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach, in view of Lu, and Tanaka to teach the above limitation(s). The motivation for doing so is that adjusting the production line when an anomaly is detected prevents further downstream damage that the anomaly could cause (cf. Tanaka, see ⁋3-5).

Claims 2-7, 11-14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Auerbach, US Pre-Grant Publication 2006/0115143A1 (“Auerbach”) in view of Lu, et al., US Pre-Grant Publication 2019/0287230A1 (“Lu”) and further in view of Tanaka, et al., US Pre-Grant Publication 2003/0138978A1 (“Tanaka”) and Bakker, et al., US Patent 7283659 B1 (“Bakker”).

Regarding claim 2, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. While the combination teaches using machine learning to prevent anomalies from propagating during semiconductor manufacturing, the combination does not explicitly teach further comprising: determining a similarity measure between the plurality of anomalies. Bakker teaches further comprising: determining a similarity measure between the plurality of anomalies. (Bakker, col.
7 lines 37-44, “Each identifying data group may then be sorted into bins based on the defect images' appearance in any suitable manor. FIGS. 3 through 6 illustrate a natural grouping procedure in accordance with one embodiment of the present invention. In general terms, the defect images of a particular identifying data group are arranged into bins having a similar appearance. Each bin may also be associated with one or more bins that have similar defect images. [further comprising: determining a similarity measure between the plurality of anomalies.]”). Auerbach, in view of Lu and Tanaka, and Bakker are both in the same field of endeavor (i.e. defect detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach, in view of Lu and Tanaka, and Bakker to teach the above limitation(s). The motivation for doing so is that optimizing the organization of the present anomalies improves performance (cf. Bakker, col. 2 lines 28-35, “A conventional image search typically includes comparing the target image to each image within the same class as the target image. Since each class may have thousands of defect images, this search can take a significant amount of time. Accordingly, there is a need for improved mechanisms for organizing defect images, searching through such images, and/or analyzing the results from such a search.”). Regarding claim 3, Auerbach in view of Lu, Tanaka, and Bakker teaches the computer-implemented method of claim 2. Bakker further teaches further comprising: selecting the plurality of anomalies to have a high similarity measure between each other. (Bakker, col. 10 lines 29-36, “The target feature vector is then compared to each centroid of each bin of the determined identifying data group in operation 706. That is, the target image's vector is compared to each bin's centroid vector. 
If the feature space is a 5×5 matrix, only 25 centroid vectors need to be compared. The one or more bin(s) that nearest to the specified target are then determined [further comprising selecting the plurality of anomalies] in operation 708. In other words, it is determined which bins have the closest appearance to the target image [to have a high similarity measure between each other.]. In one embodiment, the user may specify the number of nearest neighbors. Any suitable technique may be used to determine which bin(s) have the closest appearance to the target image.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bakker with the teachings of Auerbach, Lu, and Tanaka for the same reasons disclosed in claim 2.

Regarding claim 4, Auerbach in view of Lu, Tanaka, and Bakker teaches the computer-implemented method of claim 2. Bakker further teaches wherein the similarity measure comprises a graphic similarity of semiconductor structures. (Bakker, col. 7 lines 37-44, “Each identifying data group may then be sorted into bins based on the defect images' appearance in any suitable manor. FIGS. 3 through 6 illustrate a natural grouping procedure in accordance with one embodiment of the present invention. In general terms, the defect images of a particular identifying data group are arranged into bins having a similar appearance. Each bin may also be associated with one or more bins that have similar defect images. [wherein the similarity measure comprises a graphic similarity]” and Bakker, col. 1 line 25, “Semiconductor defects may include structural flaws [graphic similarity of semiconductor structures.]”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bakker with the teachings of Auerbach, Lu, and Tanaka for the same reasons disclosed in claim 2.
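The nearest-centroid search Bakker describes in the passage above (compare a target image's feature vector to each bin's centroid vector and return the closest bins) can be sketched as a few lines. The names `centroid`, `distance`, and `nearest_bins` are illustrative, not Bakker's; Euclidean distance is assumed as the similarity measure.

```python
def centroid(vectors):
    """Mean feature vector of the defect images already in a bin."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance; smaller means more similar appearance."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_bins(target, bins, k=1):
    """Rank bins by how close their centroid is to the target's feature
    vector and return the k closest (the user-specified nearest neighbors)."""
    ranked = sorted(bins, key=lambda name: distance(target, centroid(bins[name])))
    return ranked[:k]
```

As Bakker notes, only one centroid per bin needs comparing (25 vectors for a 5×5 bin grid), rather than every stored defect image.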
Regarding claim 5, Auerbach in view of Lu, Tanaka, and Bakker teaches the computer-implemented method of claim 2. Bakker further teaches further comprising: selecting the plurality of anomalies to have a low similarity measure with respect to the one or more further anomalies. (Bakker, col. 3 lines 29-37, “In a specific implementation, the one or more defect images which have a same appearance as a target defect image are found by finding one or more centroids which most closely match the target defect image's feature vector. The one or more defect images which have a same appearance as a target defect image are from the bins which have an associated centroid which most closely matches the target defect image's feature vector; selecting based on similar centroids is interpreted as there being centroids that are the least similar to the anomaly (i.e. further comprising: selecting the plurality of anomalies to have a low similarity measure with respect to the one or more further anomalies.).”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bakker with the teachings of Auerbach, Lu, and Tanaka for the same reasons disclosed in claim 2.

Regarding claim 6, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. While the combination teaches using machine learning to prevent anomalies from propagating during semiconductor manufacturing, the combination does not explicitly teach wherein the plurality of anomalies are binned into a single class of the current set of classes. Bakker teaches wherein the plurality of anomalies are binned into a single class of the current set of classes. (Bakker, abstract, “The defect data in each identifying data group is then automatically sorted according to defect appearance.
That is, similar defect images are associated with a single bin and similar bins are associated with other similar bins [wherein the plurality of anomalies are binned into a single class of the current set of classes.].”). Auerbach, in view of Lu and Tanaka, and Bakker are both in the same field of endeavor (i.e. defect detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach, in view of Lu and Tanaka, and Bakker to teach the above limitation(s). The motivation for doing so is that optimizing the organization of the present anomalies improves performance (cf. Bakker, col. 2 lines 28-35, “A conventional image search typically includes comparing the target image to each image within the same class as the target image. Since each class may have thousands of defect images, this search can take a significant amount of time. Accordingly, there is a need for improved mechanisms for organizing defect images, searching through such images, and/or analyzing the results from such a search.”).

Regarding claim 7, Auerbach in view of Lu, Tanaka, and Bakker teaches the computer-implemented method of claim 6. Bakker further teaches wherein the single class comprises at least one of an unknown class or a defect class. (Bakker, abstract, “The defect data in each identifying data group is then automatically sorted according to defect appearance [at least one of an unknown class or a defect class.]. That is, similar defect images are associated with a single bin and similar bins are associated with other similar bins [wherein the single class comprises].”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Bakker with the teachings of Auerbach, Lu, and Tanaka for the same reasons disclosed in claim 6.
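Bakker's sorting of similar defect images into a single bin, notably without any predefined training set or classification codes, can be sketched as a threshold-based grouping. The function name `group_by_appearance` and the use of each group's first member as its representative are our own simplifying assumptions.

```python
def group_by_appearance(features, threshold):
    """Assign each defect's feature vector to the first existing group whose
    representative (first member) is within the threshold; otherwise open a
    new group. No predefined classes or training set are required."""
    groups = []
    for f in features:
        for g in groups:
            dist = sum((a - b) ** 2 for a, b in zip(f, g[0])) ** 0.5
            if dist <= threshold:
                g.append(f)  # similar appearance: same bin
                break
        else:
            groups.append([f])  # unlike every group seen so far: new bin
    return groups
```

Because groups emerge from the data rather than from labeled examples, this style of grouping is also what the rejection later characterizes as an "explorative" annotation scheme in the claim 11 analysis.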
Regarding claim 11, Auerbach in view of Lu and Tanaka teaches the computer-implemented method of claim 1. Auerbach further teaches: further comprising: applying at least one decision criterion (Auerbach, ⁋60, “In a fourth step 308, the defects to be reviewed are selected according to a ranking system [further comprising: applying at least one decision criterion] that post-processor 62 applies to the defects.”). and an annotation comprising and an exploitative annotation scheme. (Auerbach, ⁋26, “Apparatus 10 is typically operated in an initial “learning” phase, and subsequently in a production phase. In the learning phase, surface 25 of wafer 26 is inspected, and a human operator 90 of apparatus 10 interacts with the apparatus, using results generated by the apparatus, to iteratively decide values of the results that have a high probability of being generated by yield limiting defects.; the mention of a learning phase is interpreted as having a predefined set of classes therefore an exploitative scheme (i.e. and an annotation comprising and an exploitative annotation scheme.)”).

While the combination teaches annotation using an exploitative scheme, the combination does not explicitly teach …an explorative annotation scheme. Bakker teaches …an explorative annotation scheme (Bakker, col. 2 lines 39-44, “for automatically organizing and/or analyzing a plurality of defect images without first providing a predefined set of classified images (herein referred to as a training set). In other words, sorting is not based on a training set or predefined classification codes for such defect images. […an explorative annotation scheme]”). Auerbach, in view of Lu and Tanaka, and Bakker are both in the same field of endeavor (i.e. defect detection). It would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Auerbach, in view of Lu and Tanaka, and Bakker to teach the above limitation(s).
The motivation for doing so is that optimizing the organization of the present anomalies improves performance (cf. Bakker, col. 2 lines 28-35, “A conventional image search typically includes comparing the target image to each image within the same class as the target image. Since each class may have thousands of defect images, this search can take a significant amount of time. Accordingly, there is a need for improved mechanisms for organizing defect images, searching through such images, and/or analyzing the results from such a search.”). Regarding claim 12, Auerbach in view of Lu, Tanaka, and Bakker teaches the computer-implemented method of claim 11. Auerbach further teaches wherein the at least one decision criterion differs for at least two iterations of a plurality of iterations. (Auerbach, ⁋59, “in the first iteration six defects from the most extreme bin and two defects from the next-most extreme bin are reviewed. In any subsequent iteration, rather than opening m bins from the extremities of the attr

Prosecution Timeline

Jul 15, 2021: Application Filed
Oct 29, 2024: Non-Final Rejection (§103, §112)
Feb 11, 2025: Response Filed
May 21, 2025: Final Rejection (§103, §112)
Jul 31, 2025: Request for Continued Examination
Aug 07, 2025: Response after Non-Final Action
Oct 27, 2025: Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12488244: APPARATUS AND METHOD FOR DATA GENERATION FOR USER ENGAGEMENT. Granted Dec 02, 2025 (2y 5m to grant).
Patent 12423576: METHOD AND APPARATUS FOR UPDATING PARAMETER OF MULTI-TASK MODEL, AND STORAGE MEDIUM. Granted Sep 23, 2025 (2y 5m to grant).
Patent 12361280: METHOD AND DEVICE FOR TRAINING A MACHINE LEARNING ROUTINE FOR CONTROLLING A TECHNICAL SYSTEM. Granted Jul 15, 2025 (2y 5m to grant).
Patent 12354017: ALIGNING KNOWLEDGE GRAPHS USING SUBGRAPH TYPING. Granted Jul 08, 2025 (2y 5m to grant).
Patent 12333425: HYBRID GRAPH NEURAL NETWORK. Granted Jun 17, 2025 (2y 5m to grant).
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 47%
With Interview: 90% (+43.1%)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 38 resolved cases by this examiner. Grant probability derived from career allow rate.
