Prosecution Insights
Last updated: April 19, 2026
Application No. 18/274,316

SYSTEM AND METHOD FOR MANUFACTURING QUALITY CONTROL USING AUTOMATED VISUAL INSPECTION

Non-Final OA — §101, §103, §DP
Filed: Jul 26, 2023
Examiner: KAUR, JASPREET
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Musashi AI North America Inc.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (13 granted / 16 resolved), +19.3% vs TC avg — above average
Interview Lift: +30.0% in resolved cases with interview — strong
Typical Timeline: 2y 8m average prosecution; 31 applications currently pending
Career History: 47 total applications across all art units
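The headline figures are reproducible from the raw counts above. Here is a quick check in Python, assuming the dashboard computes the career allow rate as granted over resolved and caps the with-interview figure at 99%; the cap is an inference from the displayed values, not documented behavior:

```python
# Reproducing the dashboard's headline figures from its raw counts.
granted, resolved = 13, 16
allow_rate = granted / resolved            # 0.8125 -> displayed as 81%
implied_tc_avg = allow_rate - 0.193        # "+19.3% vs TC avg" implies a ~62% baseline

interview_lift = 0.30                      # "+30.0% interview lift"
with_interview = min(allow_rate + interview_lift, 0.99)   # assumed 99% display cap

print(round(allow_rate * 100), round(implied_tc_avg * 100), round(with_interview * 100))
# -> 81 62 99
```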

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 53.2% (+13.2% vs TC avg)
§102: 7.4% (-32.6% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 16 resolved cases
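The four deltas are internally consistent: subtracting each delta from its rate recovers the same Tech Center baseline, presumably the chart's black line. A quick check (the single 40% baseline is inferred from the displayed numbers, not stated by the dashboard):

```python
# Each per-statute rate minus its "vs TC avg" delta recovers the same baseline.
rates  = {"§101": 17.2, "§103": 53.2, "§102": 7.4, "§112": 15.3}
deltas = {"§101": -22.8, "§103": 13.2, "§102": -32.6, "§112": -24.7}

for statute, rate in rates.items():
    print(f"{statute}: implied TC average = {rate - deltas[statute]:.1f}%")
# Every line prints 40.0%, i.e., a single Tech Center baseline.
```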

Office Action

§101 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that this application is a National Stage application of PCT/CA2022/050100, filed on January 25, 2022. Priority to provisional application 63/141,643, filed January 26, 2021, is also acknowledged. Priority is acknowledged under 35 U.S.C. 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The information disclosure statements ("IDS") filed on 07/26/2023 and 05/22/2025 have been reviewed and the listed references have been considered.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign mentioned in the description: 2330, as mentioned in paragraph 402 ("a generative model 2330").

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference characters not mentioned in the description: Figure 5 reference number 542 ("anomaly detection output image") and Figure 11 reference number 1100.

The drawings are objected to because Figure 2 contains reference numbers that conflict with the parts described in specification paragraphs 75-84; the drawings should be updated to reflect the reference numbers as described in the specification. Figure 15 indicates the golden sample module with reference 1626, while specification paragraph 277 states "a golden sample module 1526". Figure 17 indicates all threads of embodiments 3 and 4 as 1726, while specification paragraph 299 indicates embodiment 3 threads as 1726, 1728, and 1730 and paragraph 301 indicates embodiment 4 threads as 1740, 1742, and 1744. Figure 8 and the specification describe input images 816, 818, 820, 822, 824, and 826, which are unclear and difficult to view in the provided figure.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The specification is objected to because of the following informalities:
- In paragraph 145, "client device 338…" should be "client device 310" according to Figure 4.
- In paragraph 145, "…including display 346": Figure 4 does not show a display device 346; this should be "…including analytics interface 320".
- In paragraph 183, "the defect classification model 524…" should be "the defect classification model 522…".
- In paragraph 209, "the encoder 806…" should be "the encoder 906…".
- In paragraph 209, "Figure 9A shows a second example…" should be "Figure 9B shows a second example…".
- In paragraph 276, "the defect classification module 1520…" should be "the defect classification module 520 or 1620…".
- In paragraph 288, "the object detection module 1620…" should be "the object detection module 1604".
- In paragraph 301, "the defect classification stage 1736…" should be "the defect classification stage 1733…".

According to 37 CFR 1.71 and MPEP §§ 608.01, 2161, and 2162, the specification must be in such particularity as to enable any person skilled in the pertinent art or science to make and use the invention without extensive experimentation, and must clearly convey enough information about the invention to show that applicant invented the subject matter that is claimed. An applicant is ordinarily permitted to use his or her own terminology, as long as it can be understood. Necessary grammatical corrections are required. Reference characters must be properly applied, no single reference character being used for two different parts or for a given part and a modification of such part. See 37 CFR 1.84(p). Every feature specified in the claims must be illustrated, but there should be no superfluous illustrations. A substitute specification in proper idiomatic English and in compliance with 37 CFR 1.52(a) and (b) is required. The substitute specification must be accompanied by a statement that it contains no new matter.

Election/Restrictions

Applicant's election, without traverse, of Group I (i.e., Claims 1-13, 15-16, and 21-22) in the Response to Election Requirement filed on September 6, 2025 is acknowledged. Therefore, in the present Office Action, only Claims 1-13, 15-16, and 21-22 are being analyzed. Claims 17-19 have been withdrawn from consideration as non-elected claims. Claims 14 and 20 are cancelled.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b). The USPTO Internet website contains terminal disclaimer forms which may be used; please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-13, 15-16, and 21-22 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over the claims of co-pending Application No. 18/552,591, in view of Dou et al. (JP2020119135A - Translation from Espacenet), and in further view of Gupta et al. (US 2018/0293721 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the instant application are obvious variants of the corresponding claims of the co-pending application. This is a provisional nonstatutory obviousness-type double patenting rejection because the patentably indistinct claims have not in fact been patented. For example, the following chart compares claim 1 of the instant application to the combination of claims 1-5 and 7 of co-pending Application No. 18/552,591, in view of Dou, and in further view of Gupta:

[Claim chart: Instant application 18/402,528 | U.S. Application No. 18/552,591]

Although the co-pending application 18/552,591 discloses detecting a defect type and comparing the results of an object detection model with the output of the golden sample analysis model, it does not disclose "identifying a location of a detected object in the inspection image using the object detection model; providing the inspection image as input to a first convolutional neural network ("CNN") and generating an inspection image feature map using the first CNN; providing a golden sample image of the article as input to a second CNN and generating a golden sample feature map using the second CNN" and "determining whether the artifact location data matches the object location data according to predetermined match criteria".

However, in an analogous field of endeavor, Dou discloses "identifying a location of a detected object in the inspection image using the object detection model" (Dou paragraph [0059]: "The image evaluation device 10 according to the present invention is, in short, a device that uses a machine learning model and defect area information to evaluate an unknown defect image and determine whether or not it can be identified, and is an image evaluation device that identifies defect information (type, position, etc.) in a defect image of an electronic device using a classifier 102 based on machine learning") and "determining whether the artifact location data matches the object location data according to predetermined match criteria" (Dou paragraph [0048]: "an unknown defect that is not included in the teaching data, the deviation between the area focused on by the machine learning model 102 and the defect area is large, so it can be said that the value of the center distance 204 is large. Because of this tendency regarding the center distance, the center distance 204 can be used as the evaluation result 106 of the image evaluation device 1 as it is. Alternatively, it is also possible to output the result of comparison between the center distance 204 and a certain threshold value (for example, OK or NG) as the evaluation result 106").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the visual inspection system using object detection and a golden sample analysis module, as taught by co-pending application 18/552,591, with a comparison of the two results based on the location of the detected defect, as taught by Dou. The suggestion/motivation for doing so would have been that there is a need in the field of defect detection for improved accuracy in identifying defects: "the classification accuracy varies greatly depending on the feature values selected, so the method for selecting the feature values is important", as noted by Dou paragraph 4.

However, the combination of co-pending application 18/552,591 and Dou does not teach "providing the inspection image as input to a first convolutional neural network ("CNN") and generating an inspection image feature map using the first CNN; providing a golden sample image of the article as input to a second CNN and generating a golden sample feature map using the second CNN". Gupta teaches "providing the inspection image as input to a first convolutional neural network ("CNN")" (Gupta paragraph [0061]: "The second learning based model is configured for generating actual contours for the patterns in at least one of the acquired images of the patterns formed on the specimen input to the second learning based model by the one or more computer subsystems") "and generating an inspection image feature map using the first CNN" (Gupta paragraph [0079]: "the first and/or second learning based models include one or more fully connected layers. A 'fully connected layer' may be generally defined as a layer in which each of the nodes is connected to each of the nodes in the previous layer. The fully connected layer(s) may perform classification based on the features extracted by convolutional layer(s), which may be configured as described further herein. The fully connected layer(s) are configured for feature selection and classification. In other words, the fully connected layer(s) select features from a feature map and then classify properties in the image(s) based on the selected features. The selected features may include all of the features in the feature map (if appropriate) or only some of the features in the feature map"); "providing a golden sample image of the article as input" (Gupta paragraph [0061]: "The first learning based model is configured for generating simulated contours for the patterns based on a design for the specimen input to the first learning based model by the one or more computer subsystems, and the simulated contours are expected contours of a defect free version of the patterns in the images of the specimen generated by the imaging subsystem") "to a second CNN and generating a golden sample feature map using the second CNN" (Gupta paragraph [0079], quoted above).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine the visual inspection system using the location of the defect with object detection and a golden sample analysis module to determine defects, as taught by co-pending application 18/552,591 and Dou, with a convolutional neural network to extract features, as taught by Gupta. The suggestion/motivation for doing so would have been that "The currently used methods for hot spot detection have a number of disadvantages. For example, the currently used methods have no flexibility to automatically adapt to different pattern types (i.e., memory or logic). In addition, the currently used methods have no generalization to different image modalities. In an additional example, the currently used methods require hand crafted (heuristics) models of image modalities to characterize pattern variation and bias. In a further example, the currently used methods provide no quantitative pattern characterization and hot spot detection. Instead, the currently used methods report CD or other pattern fidelity metrics for the entire field of view from a single shot measurement", as noted by Gupta paragraph 12. Therefore, it would have been obvious to combine the disclosure of co-pending application 18/552,591 and Dou with the Gupta disclosure to obtain the invention as specified in instant claim 1, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claims 2-13, 15-16, and 21-22 are similarly rejected under nonstatutory obviousness-type double patenting as being unpatentable over the combination of claims of U.S. Application No. 18/552,591, in view of Dou, and in further view of Gupta.
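The limitation both this rejection and the later §103 rejection turn on (an inspection image through a first CNN, a golden sample image through a second CNN, and a location-based match test between the two results) can be pictured in a few lines of code. The sketch below only illustrates what the claim language recites; the architecture, tensor shapes, and thresholding rule are assumptions for illustration, not the applicant's disclosed implementation or anything taught by the cited references:

```python
import torch
import torch.nn as nn

def make_cnn() -> nn.Module:
    # Small convolutional feature extractor; whether the two CNNs share
    # weights is an assumption, since the claim only requires a "first"
    # and a "second" CNN each producing a feature map.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )

first_cnn, second_cnn = make_cnn(), make_cnn()

inspection = torch.rand(1, 3, 224, 224)   # inspection image (dummy data)
golden = torch.rand(1, 3, 224, 224)       # golden sample image (dummy data)

inspection_fmap = first_cnn(inspection)   # "inspection image feature map"
golden_fmap = second_cnn(golden)          # "golden sample feature map"

# Differences between the maps mark candidate "artifacts"; thresholding the
# difference yields artifact location data for the claimed match test.
diff = (inspection_fmap - golden_fmap).abs().mean(dim=1)   # (1, H, W)
artifact_mask = diff[0] > diff.mean() + 2 * diff.std()     # assumed rule
ys, xs = artifact_mask.nonzero(as_tuple=True)              # pixel coordinates
```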
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13, 15-16, and 21-22 are rejected under 35 U.S.C. 101 as being directed to an abstract idea. The claims recite a method and system for automatically analyzing manufactured articles for defects.

With respect to independent system claim 9:

STEP 1: Do the claims fall within one of the statutory categories? YES. Claim 9 is directed to a system, i.e., a device or a machine.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon, or an abstract idea? YES, the claims are directed toward a mental process (i.e., an abstract idea). The limitations "generating object location data identifying a location of a detected object in the inspection image" and "comparing the inspection image to a golden sample image to identify an artifact in the inspection image corresponding to a difference between the inspection image and the golden sample image, wherein the artifact is defined by artifact location data describing a location of the artifact in the inspection image; and determining whether the artifact location data matches the object location data according to predetermined match criteria", as drafted, recite an abstract idea: a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind of a person, i.e., concepts performed in the human mind (including observation, evaluation, judgment, opinion). A person could inspect an image of an article and identify the location and type of a photographed defect; a further visual analysis could compare an image of a non-defected article with the inspection image and identify the location of the defect that is present in the inspection image and not in the non-defected image. Further, a person could compare the location found during the initial inspection with the location found when comparing to the non-defected image to determine whether the two identified locations of the defect are the same, within a degree of error or lack thereof, either mentally or using pen and paper. The mere nominal recitation that the various steps are executed by a processor (e.g., a processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application. The additional elements "an input interface for receiving an inspection image of the article" and "provide an inspection image of the article as input" are recited as mere data gathering, which may not be considered an element that integrates the above-identified abstract idea into a practical application per MPEP 2106.05(g). The additional elements "provide an inspection image of the article as input" and "an object detection model trained to detect at least one defect type in an input image" are recited at a high level of generality and merely equate to "apply it", or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). See also MPEP 2106.04(a)(2)(III) with respect to mental processes ("Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer"), MPEP 2106.04(a)(2)(III)(C)(3) (using a computer as a tool to perform a mental process), and MPEP 2106.04(a)(2)(III)(D), as well as the case law cited therein.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration into a practical application, the additional steps/elements/limitations amount to no more than an abstract idea performed on a computer. The additional elements simply append well-understood, routine, conventional (WURC) activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP 2106.05(d) and 2106.07(a)(III). Therefore, claim 9 is not patent eligible.

The elements of independent claim 1 are analyzed in the same manner as claim 9. The additional elements recited in claim 1, i.e., "providing the inspection image as input to a first convolutional neural network ("CNN") and generating an inspection image feature map using the first CNN; providing a golden sample image of the article as input to a second CNN and generating a golden sample feature map using the second CNN", are recited at a high level of generality and merely equate to using a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f), as well as mere data gathering per MPEP 2106.05(g). Therefore, independent claim 1 is not patent eligible either.

A similar analysis applies to dependent claims 2-13, 15-16, and 21-22, which, under their broadest reasonable interpretation, are directed toward mere data gathering or an abstract idea (mental process and mathematical calculation), do not recite additional elements that integrate the judicial exception into a practical application, and do not recite additional elements that amount to significantly more than the judicial exception. For all of the above reasons, claims 1-13, 15-16, and 21-22 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception; accordingly, claims 1-13, 15-16, and 21-22 are not eligible subject matter under 35 U.S.C. 101.
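For readers unfamiliar with the framework, the MPEP 2106 (Alice/Mayo) eligibility analysis the examiner walks through above reduces to a short decision procedure. This sketch merely encodes that flowchart; each input is a legal conclusion an examiner reaches, not anything a program can actually compute:

```python
def eligible_under_101(statutory_category: bool,
                       recites_judicial_exception: bool,
                       practical_application: bool,
                       significantly_more: bool) -> bool:
    """Illustrative encoding of the MPEP 2106 eligibility flow."""
    if not statutory_category:            # Step 1: process/machine/manufacture/composition?
        return False
    if not recites_judicial_exception:    # Step 2A, Prong 1: judicial exception recited?
        return True
    if practical_application:             # Step 2A, Prong 2: integrated into a practical application?
        return True
    return significantly_more             # Step 2B: significantly more than the exception?

# The examiner's findings for claim 9 above: YES, YES, NO, NO -> ineligible.
assert eligible_under_101(True, True, False, False) is False
```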
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-2, 6, 8-10, 16, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Dou et al. (JP2020119135A - Translation from Espacenet) in view of Gupta et al. (US 2018/0293721 A1).

Regarding claim 1, Dou teaches "A method of automated visual inspection of an article" (Dou paragraph [0036]: "An image evaluation device and method according to a first embodiment of the present invention shows an example in which the image evaluation device and method are applied to the semiconductor inspection device described above"), "the method comprising: providing an inspection image of the article as input to an object detection model trained to detect at least one defect type in an input image and generating object location data identifying a location of a detected object in the inspection image using the object detection model" (Dou paragraph [0059]: "The image evaluation device 10 according to the present invention is, in short, a device that uses a machine learning model and defect area information to evaluate an unknown defect image and determine whether or not it can be identified, and is an image evaluation device that identifies defect information (type, position, etc.) in a defect image of an electronic device using a classifier 102 based on machine learning"), and "determining whether the artifact location data matches the object location data according to predetermined match criteria" (Dou paragraph [0048]: "an unknown defect that is not included in the teaching data, the deviation between the area focused on by the machine learning model 102 and the defect area is large, so it can be said that the value of the center distance 204 is large. Because of this tendency regarding the center distance, the center distance 204 can be used as the evaluation result 106 of the image evaluation device 1 as it is. Alternatively, it is also possible to output the result of comparison between the center distance 204 and a certain threshold value (for example, OK or NG) as the evaluation result 106").

However, Dou is not relied on to teach "providing the inspection image as input to a first convolutional neural network ("CNN") and generating an inspection image feature map using the first CNN; providing a golden sample image of the article as input to a second CNN and generating a golden sample feature map using the second CNN; comparing the inspection image to a golden sample image to identify an artifact in the inspection image corresponding to a difference between the inspection image and the golden sample image, wherein the artifact is defined by artifact location data".

Gupta teaches "providing the inspection image as input to a first convolutional neural network ("CNN")" (Gupta paragraph [0061]: "The second learning based model is configured for generating actual contours for the patterns in at least one of the acquired images of the patterns formed on the specimen input to the second learning based model by the one or more computer subsystems") "and generating an inspection image feature map using the first CNN" (Gupta paragraph [0079]: "the first and/or second learning based models include one or more fully connected layers. A 'fully connected layer' may be generally defined as a layer in which each of the nodes is connected to each of the nodes in the previous layer. The fully connected layer(s) may perform classification based on the features extracted by convolutional layer(s), which may be configured as described further herein. The fully connected layer(s) are configured for feature selection and classification. In other words, the fully connected layer(s) select features from a feature map and then classify properties in the image(s) based on the selected features. The selected features may include all of the features in the feature map (if appropriate) or only some of the features in the feature map"); "providing a golden sample image of the article as input" (Gupta paragraph [0061]: "The first learning based model is configured for generating simulated contours for the patterns based on a design for the specimen input to the first learning based model by the one or more computer subsystems, and the simulated contours are expected contours of a defect free version of the patterns in the images of the specimen generated by the imaging subsystem") "to a second CNN and generating a golden sample feature map using the second CNN" (Gupta paragraph [0079], quoted above); "comparing the inspection image to a golden sample image to identify an artifact in the inspection image corresponding to a difference between the inspection image and the golden sample image" (Gupta paragraph [0112]: "FIG. 7, rendered contours 706 and extracted contours 714 may be input to design and image contours comparison step 716 in which the contours rendered from the design are compared to the contours extracted from the image. One or more measurement algorithms may then be used to perform CD measurements at the locations of the potential hot spots"), "wherein the artifact is defined by artifact location data" (Gupta paragraph [0097]: "'Hot spots' can be generally defined as locations in a design that when printed on a specimen are more prone to defects than other locations in the design. For example, as the parameters of a fabrication process used to form patterned features on a specimen drift farther away from nominal (i.e., towards the edges of the process window), defects may appear at locations of 'hot spots' on the specimen before other locations on the specimen. Therefore, it may be advantageous to identify hot spots in a design so that the corresponding locations on a specimen are monitored more closely for defects, which can enable early detection of a process problem").

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the method of inspecting for defects using object detection to determine the type and location of the defect, as taught by Dou, with template matching analysis for defect detection, as taught by Gupta, because such a combination is the result of applying known techniques to a known device ready for improvement to yield predictable results. More specifically, comparing the results of two models permits the overall model to be more accurate and guards against false positives and false negatives. This known benefit is applicable to a visual inspection device for ensuring no defects are present in manufactured articles, and the currently used methods "for hot spot detection have a number of disadvantages. For example, the currently used methods have no flexibility to automatically adapt to different pattern types (i.e., memory or logic). In addition, the currently used methods have no generalization to different image modalities. In an additional example, the currently used methods require hand crafted (heuristics) models of image modalities to characterize pattern variation and bias. In a further example, the currently used methods provide no quantitative pattern characterization and hot spot detection. Instead, the currently used methods report CD or other pattern fidelity metrics for the entire field of view from a single shot measurement", as noted by Gupta paragraph 12. Object detection and template matching share characteristics and capabilities: both are directed toward detecting defects of an article using images. Therefore, it would have been recognized that combining an object detection model and a template matching model and comparing the results of the two models would have yielded predictable results, because (i) the level of ordinary skill in the art demonstrated by the applied references shows the ability to combine an object detection model and a template matching model to detect defects, and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art. Therefore, it would have been obvious to combine the disclosure of Dou with the Gupta disclosure to obtain the invention as specified in claim 1, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 9 recites a system with elements corresponding to the steps of the method recited in claim 1. Therefore, the recited elements of the system of claim 9 are mapped to the proposed combination in the same manner as the corresponding steps of method claim 1, and the rationale and motivation to combine Dou and Gupta presented in the rejection of claim 1 apply to this claim. Additionally, the combination of Dou and Gupta teaches "A computer system for automated visual inspection of an article, the system comprising: an input interface for receiving an inspection image of the article" (Dou paragraph [0037]: "FIG. 1 shows an example of an image evaluation device 10 equipped with an image evaluation unit 1, which inputs a defect image 101 and defect area information 105 and obtains an evaluation result 106 by referring to a machine learning model 102"), "at least one processor, and a memory in communication with the processor" (Gupta paragraph [0046]: "the term 'computer system' may be broadly defined to encompass any device having one or more processors, which executes instructions from a memory medium. The computer subsystem(s) or system(s) may also include any suitable processor known in the art such as a parallel processor").

Regarding claim 2 (and similarly claim 10), the combination of Dou and Gupta teaches "The method of claim 1, further comprising confirming the detected object as a defect if the artifact location data matches the object location data" (Dou paragraph [0048]: "an unknown defect that is not included in the teaching data, the deviation between the area focused on by the machine learning model 102 and the defect area is large, so it can be said that the value of the center distance 204 is large. Because of this tendency regarding the center distance, the center distance 204 can be used as the evaluation result 106 of the image evaluation device 1 as it is. Alternatively, it is also possible to output the result of comparison between the center distance 204 and a certain threshold value (for example, OK or NG) as the evaluation result 106").
Regarding claim 6, the combination of Dou and Gupta teaches "The method of claim 1, wherein the golden sample image is a reference image representing a clean image of the article" (Gupta paragraph [0061]: "The first learning based model is configured for generating simulated contours for the patterns based on a design for the specimen input to the first learning based model by the one or more computer subsystems, and the simulated contours are expected contours of a defect free version of the patterns in the images of the specimen generated by the imaging subsystem"). The proposed combination, as well as the motivation for combining the Dou and Gupta references presented in the rejection of claim 1, applies to claim 6. The method recited in claim 6 is therefore met by Dou and Gupta.

Regarding claim 8 (and similarly claim 16), the combination of Dou and Gupta teaches "The method of claim 1, further comprising generating a first defect class label for the detected object using the object detection model" (Dou paragraph [0059]: "The image evaluation device 10 according to the present invention is, in short, a device that uses a machine learning model and defect area information to evaluate an unknown defect image and determine whether or not it can be identified, and is an image evaluation device that identifies defect information (type, position, etc.) in a defect image of an electronic device using a classifier 102 based on machine learning"), "providing at least a portion of the inspection image containing the detected object as input to a classification model, generating a second defect class label for the detected object using the classification model, and confirming the first defect class label if the first defect class label matches the second defect class label" (Dou paragraph [0086]: "if the model 102 is an image classification model, the probability value belonging to each class is output, and if the highest probability among them is smaller than the threshold value, it is determined to be an unknown defect").

Regarding claim 21, the combination of Dou and Gupta teaches "The method of claim 1, wherein comparing the inspection image to the golden sample image includes comparing an inspection image feature map" (Gupta paragraph [0080]: "The convolutional layer(s) may have any suitable configuration known in the art and are generally configured to determine features for an image as a function of position across the image (i.e., a feature map) by applying a convolution function to the input image using one or more filters") "of the inspection image to a golden sample feature map of the golden sample image" (Gupta paragraph [0112]: "FIG. 7, rendered contours 706 and extracted contours 714 may be input to design and image contours comparison step 716 in which the contours rendered from the design are compared to the contours extracted from the image"). The proposed combination, as well as the motivation for combining the Dou and Gupta references presented in the rejection of claim 1, applies to claim 21. The method recited in claim 21 is therefore met by Dou and Gupta.

Regarding claim 22, the combination of Dou and Gupta teaches "The method of claim 21, further comprising generating the inspection image feature map using a first convolutional neural network ("CNN") and generating the golden sample image feature map using a second CNN" (Gupta paragraph [0079]: "the first and/or second learning based models include one or more fully connected layers. A 'fully connected layer' may be generally defined as a layer in which each of the nodes is connected to each of the nodes in the previous layer. The fully connected layer(s) may perform classification based on the features extracted by convolutional layer(s), which may be configured as described further herein. The fully connected layer(s) are configured for feature selection and classification. In other words, the fully connected layer(s) select features from a feature map and then classify properties in the image(s) based on the selected features. The selected features may include all of the features in the feature map (if appropriate) or only some of the features in the feature map"). The proposed combination, as well as the motivation for combining the Dou and Gupta references presented in the rejection of claim 1, applies to claim 22. The method recited in claim 22 is therefore met by Dou and Gupta.

Claims 3-5 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Dou and Gupta, in further view of Gokturk et al. (US 2007/0258645 A1).

Regarding claim 3 (and similarly claim 11), the combination of Dou and Gupta teaches "The method of claim 1, further comprising displaying the detected artifact via a user interface if the artifact location data does not match the object location data" (Dou paragraph [0056]: "the evaluation result display in FIG. 9, a defect image 101, defect area information 105, attention image 104, and an evaluation result 106 in which these images are overlapped are displayed on the display screen of the evaluation result display unit 5"), "wherein" (Dou paragraph [0086]: "It is also conceivable to determine whether the defect image 101 is an unknown defect based on the output value calculated by the model 102"; or Gokturk paragraph [0081]: "The user is then presented with the accepted recognition results. The user may then provide input in the form of indicating whether individual results are correct or incorrect"). However, the combination of Dou and Gupta is not relied on to teach "wherein the user interface is configured to receive input data from a user indicating whether the detected artifact is an […] anomaly". In an analogous field of endeavor, Gokturk teaches "wherein the user interface is configured to receive input data from a user indicating whether the detected artifact is an […] anomaly" (Gokturk paragraph [0081]: "The user is then presented with the accepted recognition results. The user may then provide input in the form of indicating whether individual results are correct or incorrect"). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine a visual inspection method using object detection and template matching, as taught by Dou and Gupta, with user feedback on the detected defect, as taught by Gokturk. The suggestion/motivation for doing so would have been that, in the field of detection/recognition, there is a need for the output to be accurate: "The recognition process generates correct results as well as errors, which are undesirable. An acceptance threshold is applied to the confidence score of each result to determine which results to accept and which to reject", as noted by the Gokturk disclosure in paragraph 81. Therefore, it would have been obvious to combine the disclosure of Dou and Gupta with the Gokturk disclosure to obtain the invention as specified in claim 3, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Regarding claim 4 (and similarly claim 12), the combination of Dou, Gupta, and Gokturk teaches "The method of claim 3, further comprising receiving input data from the user indicating" (Gokturk paragraph [0081]: "The user is then presented with the accepted recognition results. The user may then provide input in the form of indicating whether individual results are correct or incorrect") "the detected artifact is an anomalous defect" (Dou paragraph [0086]: "It is also conceivable to determine whether the defect image 101 is an unknown defect based on the output value calculated by the model 102") "and tagging the inspection image as an object detection training sample" (Gokturk paragraph [0082]: "Incorrect results may be added to a set of error examples. The set of error examples is used in turn along with the training set and potentially additional user supplied examples to run recognition again and increase the number of recognized faces"). The proposed combination, as well as the motivation for combining the Dou, Gupta, and Gokturk references presented in the rejection of claim 3, applies to claim 4. The method recited in claim 4 is therefore met by Dou, Gupta, and Gokturk.

Regarding claim 5 (and similarly claim 13), the combination of Dou, Gupta, and Gokturk teaches "The method of claim 4, further comprising initiating a retraining of the object detection model using the object detection training sample" (Gokturk paragraph [0082]: "Incorrect results may be added to a set of error examples. The set of error examples is used in turn along with the training set and potentially additional user supplied examples to run recognition again and increase the number of recognized faces"). The proposed combination, as well as the motivation for combining the Dou, Gupta, and Gokturk references presented in the rejection of claim 3, applies to claim 5. The method recited in claim 5 is therefore met by Dou, Gupta, and Gokturk.

Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Dou and Gupta, in further view of Zhang et al. (US 2017/0148226 A1).

Regarding claim 7, the combination of Dou and Gupta teaches the method of claim 1. However, the combination of Dou and Gupta is not relied on to teach "generating the golden sample image from the inspection image using a generative machine learning model". In an analogous field of endeavor, Zhang teaches "generating the golden sample image from the inspection image" (Zhang paragraph [0104]: "can be used to generate a 'golden' or 'standard' reference to improve die-to-database defect detection algorithms for mask and wafer inspection and metrology") "using a generative machine learning model" (Zhang paragraph [0103]: "FIG. 6, the encoder side of the generative model may be configured as described herein").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine a visual inspection method using object detection and template matching, as taught by Dou and Gupta, with the use of a generative model to generate the golden image, as taught by Zhang. The suggestion/motivation for doing so would have been: "In this manner, a 'golden' or 'standard' reference, i.e., the simulated image(s), may be generated from the design information for the specimen. Having the capability to generate and use such a 'golden' or 'standard' reference may be particularly important in instances in which the processes used to form the design information on the specimen cannot be predicted at all or very well using a forward or discriminative model", as noted by the Zhang disclosure in paragraph 105. Therefore, it would have been obvious to combine the disclosure of Dou and Gupta with the Zhang disclosure to obtain the invention as specified in claim 7, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Regarding claim 15, the combination of Dou, Gupta, and Zhang teaches "The system of claim 9, wherein the golden sample image is a reference image representing a clean image of the article" (Gupta paragraph [0061]: "The first learning based model is configured for generating simulated contours for the patterns based on a design for the specimen input to the first learning based model by the one or more computer subsystems, and the simulated contours are expected contours of a defect free version of the patterns in the images of the specimen generated by the imaging subsystem"), "and wherein the at least one processor is further configured to generate the golden sample image from the inspection image" (Zhang paragraph [0104]: "can be used to generate a 'golden' or 'standard' reference to improve die-to-database defect detection algorithms for mask and wafer inspection and metrology") "using a generative machine learning model" (Zhang paragraph [0103]: "FIG. 6, the encoder side of the generative model may be configured as described herein"). The proposed combination, as well as the motivation for combining the Dou, Gupta, and Zhang references presented in the rejection of claim 7, applies to claim 15. The system recited in claim 15 is therefore met by Dou, Gupta, and Zhang.
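The "generating the golden sample image from the inspection image using a generative machine learning model" limitation of claims 7 and 15 follows the familiar reconstruction-based anomaly-detection pattern: a generative model trained only on defect-free articles reproduces a clean version of whatever it sees, so defects surface in the residual. A toy sketch, assuming a convolutional autoencoder (neither Zhang's nor the applicant's actual model):

```python
import torch
import torch.nn as nn

class GoldenSampleGenerator(nn.Module):
    """Toy convolutional autoencoder. Trained only on defect-free images,
    its reconstruction of an inspection image approximates a golden sample:
    defects are absent from the output because the model never saw any."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

inspection = torch.rand(1, 3, 128, 128)        # dummy inspection image
golden = GoldenSampleGenerator()(inspection)   # synthesized golden sample
residual = (inspection - golden).abs()         # high values mark candidate defects
```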
References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Publication 2020/0134800 A1 to Hu et al. discloses a visual inspection method and system that compares a template (golden sample image) with an inspection image to determine a defective area.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR, whose telephone number is (571) 272-5534. The examiner can normally be reached Monday-Friday, 7:30 am-4:00 pm PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JASPREET KAUR/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Jul 26, 2023
Application Filed
Mar 13, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596301
RETICLE INSPECTION AND PURGING METHOD AND TOOL
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12555199
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM, WITH SYNTHESIS OF TWO INFERENCE RESULTS ABOUT AN IDENTICAL FRAME AND WITH INITIALIZING OF RECURRENT INFORMATION
Granted Feb 17, 2026 • 2y 5m to grant
Patent 12513319
END-TO-END INSTANCE-SEPARABLE SEMANTIC-IMAGE JOINT CODEC SYSTEM AND METHOD
Granted Dec 30, 2025 • 2y 5m to grant
Patent 12427606
SYSTEMS AND METHODS FOR NON-DESTRUCTIVELY TESTING STATOR WELD QUALITY AND EPOXY THICKNESS
Granted Sep 30, 2025 • 2y 5m to grant
Patent 12421641
LAUNDRY TREATMENT APPLIANCE AND METHOD OF USING THE SAME ACCORDING TO MATCHED LAUNDRY LOADS
Granted Sep 23, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 99% (+30.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
