DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
In response to applicant’s amendment received on 3/2/26, all requested changes to the claims have been entered. Claims 1-9 and 13-23 were previously pending and remain pending.
Response to Arguments
Applicant's arguments filed 3/2/26 have been fully considered but they are not persuasive.
Regarding independent claims 1, 13 and 21, the applicant argues that the prior art of Checka (US2022/0309637) does not disclose the limitation “a trainable feature classifier processing the plurality of visual features extracted by the feature extraction model”.
The Examiner disagrees, and the rejection is herein maintained. Checka (see figure 1, elements 102, 104; figures 4, 5; and paragraphs 30, 35, 39-41, 45 and 48) discloses an image processing model (102) that extracts subsets of pixels from an image, referred to as patches, corresponding to “visual features” under the broadest reasonable interpretation of that phrase, which encompasses any features of the visual image. Those image patches (i.e. visual features extracted by the image processing model (102)) are fed into the defect presence classifier (104) (i.e. “trainable feature classifier”), which uses the image patches to identify which patches or regions of interest correspond to a defect in the circuit under inspection.
Additionally, the applicant appears to argue that the visual features are features extracted by SIFT/SURF/ORB; however, this is not specifically recited in any of the claims, and certainly not in independent claims 1, 13 and 21. Dependent claims 8 and 22 do state that the feature extraction model implements at least one of SIFT, SURF, KAZE, AKAZE, ORB or BRISK, but not that the visual features come directly from those techniques. In other words, claims 8 and 22 only recite that at least one of SIFT, SURF, KAZE, AKAZE, ORB or BRISK is implemented as part of the extraction model.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 6-9, 13-15 and 21-23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2022/0309637 to Checka et al. (“Checka”).
Regarding claim 1, Checka discloses a system comprising:
a memory (Fig. 14, element 1408; paragraph 72); and
a processing device, operatively coupled with the memory (Fig. 14, element 1406; paragraph 73), to perform operations comprising:
receiving an image of a substrate of an electronic device (Fig. 1, elements 108 and 110; paragraph 29, wherein an image of a substrate being inspected (108) and a reference image of a substrate (110) are received);
extracting, by a feature extraction model processing the image, a plurality of visual features from the image (Fig. 1, element 102; Figs. 4, 5; paragraphs 30, 35, 39 and 40, wherein image subsets or patches (i.e. plurality of visual features) comprising potential defects are extracted via image processing stage/model (102) that comprises key point detection, registration, subtraction and filtering); and
identifying, by a trainable feature classifier processing the plurality of visual features extracted by the feature extraction model, a region of interest corresponding to an electronic circuit associated with performance of the electronic circuit (Fig. 1, element 104; paragraphs 41, 45 and 48, wherein defect presence classifier (i.e. trainable feature classifier, for example CNN or ResNet) processes the image subset/patches extracted by the image processing stage/model (i.e. feature extraction model) to identify a region thereof corresponding to a defect in the electronic circuit that would be associated with performance of the circuit (e.g. “short” as seen in figure 11)).
Regarding claim 2, Checka discloses the system of claim 1, wherein the operations further comprise:
in view of the region of interest, identifying a defect that leads to a failure of the electronic device (Fig. 1, element 106; Fig. 11, element 1110; paragraphs 59-61, wherein defect characterization stage (106) comprises a trainable defect classification model (e.g. R-CNN, YOLO, etc.) that processes the image subset/patch (i.e. region of interest) to identify a type, location and size of a defect (e.g. “short”) that would lead to a failure of the circuit).
Regarding claim 3, Checka discloses the system of claim 1, wherein the operations further comprise:
determining, by a trainable defect classification model processing a subset of the plurality of visual features associated with the region of interest, a type of a defect associated with the region of interest (Fig. 1, element 106; Fig. 11, element 1110; paragraphs 59-61, wherein defect characterization stage (106) comprises a trainable defect classification model (e.g. R-CNN, YOLO, etc.) that processes the image subset/patch to determine a type, location and size of a defect).
Regarding claim 6, Checka discloses the system of claim 1, wherein identifying the region of interest further comprises:
identifying a plurality of candidate regions in the image (Fig. 1, element 102; Figs. 4, 5; paragraphs 35, 39 and 40, wherein image subsets or patches (i.e. plurality of visual features) comprising potential defects are extracted via image processing stage/model (102)); and
identifying the region of interest among the plurality of candidate regions (Fig. 1, element 104; paragraphs 41 and 45, wherein defect presence classifier (i.e. trainable feature classifier, for example CNN) processes the image subset/patches to identify a region thereof corresponding to a defect in the electronic circuit).
Regarding claim 7, Checka discloses the system of claim 1, wherein the feature extraction model is trainable (Fig. 1, element 102; Figs. 4, 5; paragraphs 30-40, wherein image processing stage/model (102) (i.e. feature extraction model) comprises key point detection, registration, subtraction and filtering, all of which are “trainable” in that they are prepared for their individual tasks, which result in the image subset/patches (i.e. plurality of visual features) comprising potential defects).
Regarding claim 8, Checka discloses the system of claim 1, wherein the feature extraction model implements AT LEAST ONE OF: Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), KAZE, Accelerated-KAZE (AKAZE), Oriented FAST and Rotated BRIEF (ORB), OR Binary Robust Invariant Scalable Keypoints (BRISK) (paragraph 31, wherein image processing stage/model (102) (i.e. feature extraction model) implements at least one of SIFT, SURF or ORB).
Regarding claim 9, Checka discloses the system of claim 1, wherein the trainable feature classifier implements AT LEAST ONE OF: Perceptron, Naive Bayes, Decision Tree, Logistic Regression, K-Nearest Neighbor, Artificial Neural Networks, Deep Learning, OR Support Vector Machine (Fig. 1, element 104; paragraphs 41 and 45, wherein defect presence classifier (i.e. trainable feature classifier) can be implemented as a convolutional neural network which corresponds to at least an “Artificial Neural Network” or “Deep Learning”).
Regarding claim 13, Checka discloses a non-transitory computer readable medium comprising instructions (Fig. 14, element 1408; paragraph 72), which when executed by a processor (Fig. 14, element 1406; paragraph 73), cause the processor to perform operations comprising:
receiving an image of a semiconductor substrate of an electronic device (Fig. 1, elements 108 and 110; paragraph 29, wherein an image of a substrate being inspected (108) and a reference image of a substrate (110) are received);
extracting, by a feature extraction model processing the image, a plurality of visual features from the image (Fig. 1, element 102; Figs. 4, 5; paragraphs 30, 35, 39 and 40, wherein image subsets or patches (i.e. plurality of visual features) comprising potential defects are extracted via image processing stage/model (102) that comprises key point detection, registration, subtraction and filtering); and
identifying, by a trainable feature classifier processing the plurality of visual features extracted by the feature extraction model, a region of interest corresponding to an electronic circuit exhibiting suboptimal performance (Fig. 1, element 104; paragraphs 41, 45 and 48, wherein defect presence classifier (i.e. trainable feature classifier, for example CNN or ResNet) processes the image subset/patches extracted by the image processing stage/model (i.e. feature extraction model) to identify a region thereof corresponding to a defect in the electronic circuit that would be associated with the circuit exhibiting suboptimal performance (e.g. “short” as seen in figure 11)).
Regarding claim 14, Checka discloses the non-transitory computer readable medium of claim 13, wherein the operations further comprise: preprocessing the image (Fig. 1, elements 102, 114, 116; paragraphs 30-34, wherein key point detection and registration correspond to “preprocessing the image” before subtraction and filtering).
Regarding claim 15, Checka discloses the non-transitory computer readable medium of claim 13, wherein the operations further comprise:
determining, by a trainable defect classification model processing a subset of the plurality of visual features associated with the region of interest, a type of a defect associated with the region of interest (Fig. 1, element 106; Fig. 11, element 1110; paragraphs 59-61, wherein defect characterization stage (106) comprises a trainable defect classification model (e.g. R-CNN, YOLO, etc.) that processes the image subset/patch to determine a type, location and size of a defect).
Regarding claim 21, Checka discloses a method, comprising:
receiving, by a processing device, an image of a substrate of an electronic device (Fig. 1, elements 108 and 110; paragraph 29, wherein an image of a substrate being inspected (108) and a reference image of a substrate (110) are received);
extracting, by a feature extraction model processing the image, a plurality of visual features from the image (Fig. 1, element 102; Figs. 4, 5; paragraphs 30, 35, 39 and 40, wherein image subsets or patches (i.e. plurality of visual features) comprising potential defects are extracted via image processing stage/model (102) that comprises key point detection, registration, subtraction and filtering); and
identifying, by a trainable feature classifier processing the plurality of visual features extracted by the feature extraction model, a region of interest corresponding to an electronic circuit associated with performance of the electronic circuit (Fig. 1, element 104; paragraphs 41, 45 and 48, wherein defect presence classifier (i.e. trainable feature classifier, for example CNN or ResNet) processes the image subset/patches extracted by the image processing stage/model (i.e. feature extraction model) to identify a region thereof corresponding to a defect in the electronic circuit that would be associated with suboptimal performance (e.g. “short” as seen in figure 11) of the circuit).
Regarding claim 22, Checka discloses the method of claim 21, wherein the feature extraction model comprises AT LEAST ONE OF: Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), KAZE, Accelerated-KAZE (AKAZE), Oriented FAST and Rotated BRIEF (ORB), OR Binary Robust Invariant Scalable Keypoints (BRISK) (paragraph 31, wherein image processing stage/model (102) (i.e. feature extraction model) implements at least one of SIFT, SURF or ORB).
Regarding claim 23, Checka discloses the method of claim 21, wherein the trainable feature classifier comprises AT LEAST ONE OF: Perceptron, Naive Bayes, Decision Tree, Logistic Regression, K-Nearest Neighbor, Artificial Neural Networks, Deep Learning, OR Support Vector Machine (Fig. 1, element 104; paragraphs 41 and 45, wherein defect presence classifier (i.e. trainable feature classifier) can be implemented as a convolutional neural network which corresponds to at least an “Artificial Neural Network” or “Deep Learning”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4, 5 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0309637 to Checka et al. (“Checka”) in view of US 2020/0105500 to Chou et al. (“Chou”).
Regarding claim 4, the prior art of Checka discloses the system of claim 1.
Checka does not expressly disclose wherein the operations further comprise:
receiving a second image of the region of interest, wherein a resolution of the second image exceeds a resolution of the image of the substrate;
extracting, by a second feature extraction model processing the second image, a second plurality of visual features from the second image; and
identifying, by a second trainable feature classifier processing the second plurality of visual features, a second region of interest corresponding to an electronic circuit associated with performance of the electronic circuit within the second image, wherein the second region of interest is a part of the region of interest.
Chou discloses a process of identifying a region of interest corresponding to an electronic circuit associated with performance of the circuit (Fig. 5, elements 510-550; paragraphs 51-55, wherein a coarse region of interest corresponds to the claimed region of interest) further comprising:
receiving a second image of the region of interest, wherein a resolution of the second image exceeds a resolution of the image of the substrate (Fig. 5, element 560; paragraphs 15-20, 40-48, 56, wherein the coarse region of interest is processed to obtain/receive a second image of the region at a first scale with a finer/higher resolution (i.e. a resolution that exceeds that of the coarse region of interest));
extracting, by a second feature extraction model processing the second image, a second plurality of visual features from the second image (Fig. 5, element 560; paragraphs 15-20, 40-48, 56, wherein at each scale of multiple scales feature identifiers are extracted and mapped to those of previous scales); and
identifying, by a second trainable feature classifier processing the second plurality of visual features, a second region of interest corresponding to an electronic circuit associated with performance of the electronic circuit within the second image, wherein the second region of interest is a part of the region of interest (Fig. 5, element 560; paragraphs 40-48, 56, wherein at each scale of multiple scales the feature identifiers extracted are mapped to those of previous scales using a second trainable feature classifier (e.g. CNN, SRCNN), and the result is an identified second fine resolution region of interest that corresponds to (i.e. part of) the coarse region of interest received).
Checka and Chou are combinable because they are from the same art of image processing, specifically identifying regions of interest on electronic circuits.
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to incorporate the technique of identifying a second region of interest associated with performance of an electronic circuit, having a resolution that exceeds that of an initial image of the circuit, as taught by Chou, into the process of identifying a region of interest corresponding to an electronic circuit associated with performance of the circuit disclosed by Checka.
The suggestion/motivation for doing so would have been to eliminate the need to capture additional images of the wafer, thereby providing improved processing (Chou, paragraph 50, last sentence).
Therefore, it would have been obvious to combine Checka with Chou to obtain the invention as specified in claim 4.
Regarding claim 5, the combination of Checka and Chou discloses the system of claim 4, wherein the operations further comprise:
determining, by a trainable defect classification model processing a subset of the second plurality of visual features associated with the second region of interest, a type of a defect associated with the second region of interest (Checka, Fig. 1, element 106; Fig. 11, element 1110; paragraphs 59-61, wherein defect characterization stage (106) comprises a trainable defect classification model (e.g. R-CNN, YOLO, etc.) that processes the image subset/patch to determine a type, location and size of a defect. Chou, paragraphs 26, 50 and 57).
Regarding claim 16, the prior art of Checka discloses the non-transitory computer readable medium of claim 13.
Checka does not expressly disclose wherein the operations further comprise:
receiving a second image of the region of interest, wherein a resolution of the second image exceeds a resolution of the image of the semiconductor substrate;
extracting, by a second feature extraction model processing the second image, a second plurality of visual features from the second image; and
identifying, by a second trainable feature classifier processing the second plurality of visual features, a second region of interest corresponding to an electronic circuit exhibiting suboptimal performance within the second image, wherein the second region of interest is a part of the region of interest.
Chou discloses a process of identifying a region of interest corresponding to an electronic circuit associated with suboptimal performance of the circuit (Fig. 5, elements 510-550; paragraphs 51-55, wherein a coarse region of interest corresponds to the claimed region of interest) further comprising:
receiving a second image of the region of interest, wherein a resolution of the second image exceeds a resolution of the image of the semiconductor substrate (Fig. 5, element 560; paragraphs 40-48, 56, wherein the coarse region of interest is processed to obtain/receive a second image of the region at a first scale with a finer/higher resolution (i.e. a resolution that exceeds that of the coarse region of interest));
extracting, by a second feature extraction model processing the second image, a second plurality of visual features from the second image (Fig. 5, element 560; paragraphs 15-20, 40-48, 56, wherein at each scale of multiple scales feature identifiers are extracted and mapped to those of previous scales); and
identifying, by a second trainable feature classifier processing the second plurality of visual features, a second region of interest corresponding to an electronic circuit exhibiting suboptimal performance within the second image, wherein the second region of interest is a part of the region of interest (Fig. 5, element 560; paragraphs 15-20, 40-48, 56, wherein at each scale of multiple scales the feature identifiers extracted are mapped to those of previous scales using a second trainable feature classifier (e.g. CNN or SRCNN), and the result is an identified second fine resolution region of interest that corresponds to (i.e. part of) the coarse region of interest received).
Checka and Chou are combinable because they are from the same art of image processing, specifically identifying regions of interest on electronic circuits.
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to incorporate the technique of identifying a second region of interest associated with performance of an electronic circuit, having a resolution that exceeds that of an initial image of the circuit, as taught by Chou, into the process of identifying a region of interest corresponding to an electronic circuit associated with performance of the circuit disclosed by Checka.
The suggestion/motivation for doing so would have been to eliminate the need to capture additional images of the wafer, thereby providing improved processing (Chou, paragraph 50, last sentence).
Therefore, it would have been obvious to combine Checka with Chou to obtain the invention as specified in claim 16.
Regarding claim 17, the combination of Checka and Chou discloses the non-transitory computer readable medium of claim 16, wherein the image is received from a first imaging device, and the second image is received from a second imaging device (Chou, paragraph 12, wherein the first coarse image and second fine image can be obtained using a first and second imaging device).
Regarding claim 18, the combination of Checka and Chou discloses the non-transitory computer readable medium of claim 16, wherein the feature extraction model and the second feature extraction model use different feature detectors (Checka, Fig. 1, element 102; Figs. 4, 5; paragraphs 30, 35, 39 and 40, wherein the feature extraction model corresponds to image processing stage/model (102) that comprises key point detection, registration, subtraction and filtering. Chou, Fig. 5, element 560; paragraphs 15-20, 40-48, 56, a different second feature extraction model is disclosed wherein at each scale of multiple scales feature identifiers are extracted and mapped to those of previous scales).
Regarding claim 19, the combination of Checka and Chou discloses the non-transitory computer readable medium of claim 16, wherein the trainable feature classifier and the second trainable feature classifier are trained using different training data (Checka, Fig. 1, element 104; paragraphs 41, 45 and 48, wherein defect presence classifier (i.e. trainable feature classifier) is, for example, a CNN or ResNet that uses a labeled dataset from thousands of substrate images. Chou, Fig. 5, element 560; paragraphs 15-20, 40-48, 56, wherein a second trainable feature classifier (e.g. SRCNN) is used, which recursively refines feature details of region-based feature identifiers for learning/training data).
Regarding claim 20, the combination of Checka and Chou discloses the non-transitory computer readable medium of claim 16, wherein the trainable feature classifier and the second trainable feature classifier are trained using different machine learning techniques (Checka, Fig. 1, element 104; paragraphs 41, 45 and 48, wherein defect presence classifier (i.e. trainable feature classifier) is, for example, a CNN or ResNet that uses a labeled dataset from thousands of substrate images. Chou, Fig. 5, element 560; paragraphs 15-20, 40-48, 56, wherein a second trainable feature classifier (e.g. SRCNN) is used, which recursively refines feature details of region-based feature identifiers for learning/training data and thus corresponds to a different learning technique from that of the classifier disclosed by Checka).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON W CARTER whose telephone number is (571)272-7445. The examiner can normally be reached 8am - 5pm (Mon - Fri).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John Villecco can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON W CARTER/Primary Examiner, Art Unit 2661