Prosecution Insights
Last updated: April 19, 2026
Application No. 18/589,658

SEMICONDUCTOR IMAGE PROCESSING APPARATUS AND SEMICONDUCTOR IMAGE PROCESSING METHOD

Non-Final OA: §102, §103, §112
Filed: Feb 28, 2024
Examiner: KRAYNAK, JACK PETER
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Kioxia Corporation
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 78% (above average; 75 granted / 96 resolved; +16.1% vs TC avg)
Interview Lift: +18.8% in resolved cases with interview (a strong lift of roughly +19%)
Avg Prosecution: 3y 1m typical timeline; 18 applications currently pending
Total Applications: 114 across all art units (career history)
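The stat cards above are simple arithmetic over the examiner's 96 resolved cases. A minimal sketch of how the headline numbers appear to be derived, an assumption based on the page's own footnote ("grant probability derived from career allow rate"); all variable names are illustrative, not from the tool:

```python
# Illustrative arithmetic behind the examiner stat cards above.
# Only the input counts (75 granted / 96 resolved) and the published
# deltas come from the page; everything else is assumed.
granted, resolved = 75, 96

allow_rate = granted / resolved                    # 0.78125 -> "78% Career Allow Rate"
interview_lift = 0.188                             # "+18.8% Interview Lift" (percentage points)
with_interview = allow_rate + interview_lift       # ~0.969  -> "97% With Interview"
implied_tc_avg = allow_rate - 0.161                # "+16.1% vs TC avg" implies a ~62% baseline

print(f"career allow rate:  {allow_rate:.1%}")     # 78.1%
print(f"with interview:     {with_interview:.1%}") # 96.9%, shown as 97%
print(f"implied TC average: {implied_tc_avg:.1%}") # 62.0%
```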

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 96 resolved cases
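Assuming each "vs TC avg" delta is simply the examiner's own rate minus the estimated Tech Center baseline, the baseline can be recovered from any row; for example:

TC avg(§103) ≈ 54.4% − 14.4% = 40.0%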

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1 and 18: The limitations in claims 1 and 18 state: "identify a label corresponding to a feature amount included in an input image by using an identifier; learn a model for inferring the feature amount included in the input image and learns the identifier." It is unclear to the examiner how the identifier can be used to identify a label corresponding to a feature amount and then be learned after it has been used. The examiner has interpreted the limitations to teach recognizing a label corresponding to a feature amount, and training the model to identify the feature amount in the input image to learn an identifier. Claims 2-17 and 19-20 are rejected based on dependency.

Claims 2 and 19: The limitations of claims 2 and 19 state "wherein the input image includes a simulated image having a known true answer label and a real image having an unknown label." This is unclear: because "the input image" is singular, it is ambiguous whether it refers to the simulated image or the real image.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kondo et al. (US 20190139210 A1).
Regarding claim 1, Kondo et al. teaches a semiconductor image processing apparatus comprising a processing circuitry, the processing circuitry configured to (Para 1: the invention relates to a defect classification apparatus classifying various types of defects generated in a manufacturing line of semiconductor wafers and, more specifically, to a defect classification apparatus and a defect classification method including a method and a unit for processing images captured by an image-capturing apparatus and learning a classifier by using the captured images and the processed images; i.e., a semiconductor image processing apparatus for determining a semiconductor defect):

identify a label corresponding to a feature amount included in an input image by using an identifier (Fig 9, S903-S904: instruct the defect class of the captured image and provide parameters; and Para 95: the subsequent processes S903 to S907 are performed on the images of the plurality of channels obtained by capturing the image of the same place. First, the defect classes of the images of the plurality of channels stored in the image storage unit 109 as the captured images are instructed and parameters for processing the images are provided (S903). The instructed defect classes are stored in the defect class storage unit 110 (S904), and the images of the plurality of channels are processed by the image processing unit 112 (S905). i.e., identifying a label corresponding to a feature amount included in an input image by using an identifier is instructing a defect class of an input image (captured image), and the defect classes are stored in the defect class storage unit);

learn a model for inferring the feature amount included in the input image and learns the identifier (Fig 9, S905-S907, and Para 96: the images of the plurality of channels processed by the image processing unit 112 are stored in the image storage unit 109 as the processed images based on the channel information accompanying the images respectively (S906), and the defect classes of the processed images of the plurality of channels are stored in the defect class storage unit 110 (S907). i.e., learning an identifier for inferring the feature amount included in the input image; furthermore, the stored defect classes and processed images are used to train the model in S908-S909); and

perform additional learning of the model based on the input image and the learned identifier (Fig 9, S908-S909, and Para 97: the classifier for classifying the defect classes of the images is learned by the classifier learning unit 113 using the captured images and the processed images stored in the image storage unit 109, and the defect classes stored in the defect class storage unit 110 (S908), and the captured images stored in the image storage unit 109 are classified by the image classification unit 114 using the learned classifier (S909). i.e., performing additional learning of the model based on the captured image (which can be considered the input image) and the defect classes (the learned identifier)).

Regarding claim 18, claim 18 is rejected for the same reasons as claim 1 above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-4, 10-11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (US 20190139210 A1) in view of Lee et al. (US 20240273374 A1).

Regarding claim 2, Kondo et al. teaches the semiconductor image processing apparatus according to claim 1, wherein the input image includes a simulated image having a known true answer label and a real image having an unknown label (Fig 9 and Fig 11; see Para 120: the processing flow is the same as Fig 9, the difference being the processing method of the image processing step (S905). A simulated image with a known true answer label is included in the input image to the algorithm of Fig 9, as stated in Para 122 and shown in Fig 11: a defect portion is extracted from each of the captured images to be processed (S1101); the image, from which the defect portion is extracted, is subject to deformation such as extension or contraction or an aspect change or the like (S1102); the image obtained by the deformation process is synthesized with the images of the plurality of channels different from the images to be processed per channel (S1103); and the defect class of the image obtained by the synthesis process is instructed (S1104)).

Regarding the limitations "and the processing circuitry is configured to learn the model based on the true answer label and an inference image inferred by inputting the simulated image to the model, and learn the identifier based on a result of comparing the label identified by inputting the inference image to the identifier to the true answer label": Kondo et al. teaches that the simulated image and the known true answer label are inputted into the model (Fig 9 and Fig 11, S906-S907, following the flowchart of Fig 11), but does not precisely teach these limitations.
In a similar field of endeavor, Lee et al. teaches the processing circuitry configured to learn the model based on the true answer label and an inference image inferred by inputting the simulated image to the model, and learn the identifier based on a result of comparing the label identified by inputting the inference image to the identifier to the true answer label (Fig 4 and Para 50-61: the self-supervised pre-training module 102 receives the unlabeled real images, which comprise one or more real images with defects and one or more real images without any defect. In process 403, the self-supervised pre-training module 102 generates synthetic defect images by using the detected anomalies in the unlabeled real images; a detected anomaly region is overlaid on a real non-defect image to generate a synthetic defect image. i.e., the inference image is the synthetic defect image generated in process 403. This can be considered an inference image because the model infers that the generated synthetic defect image correctly contains a defect based on the anomaly detection (identifying the defect) of the model in 402 (also see Fig 1, 101, and Fig 2). The synthetic defect image is therefore compared to the true answer label during the contrastive learning task, with the goal of learning the identifier (classification) (Para 57: the contrastive learning task aims to further separate the defect class from the non-defect class by learning an embedding space in which similar pairs (defect pairs or non-defect pairs) are close to each other, while dissimilar pairs (defect and non-defect pairs) are further apart. Also see the anomaly learning module 101 in Fig 1-2 and Para 30-43)).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to combine the teachings of Kondo et al. (US 20190139210 A1) and Lee et al. (US 20240273374 A1) so that the processing circuitry is configured to learn the model based on the true answer label and an inference image inferred by inputting the simulated image to the model. Doing so would improve the performance of downstream tasks such as multiclass defect classification (Para 57).

Regarding claim 3, Kondo et al. teaches the semiconductor image processing apparatus according to claim 2, wherein the processing circuitry is configured to start to learn the identifier after the learning of the model is ended, or while learning the model (Para 97: the classifier for classifying the defect classes of the images is learned by the classifier learning unit 113 using the captured images and the processed images stored in the image storage unit 109, and the defect classes stored in the defect class storage unit 110 (S908), and the captured images stored in the image storage unit 109 are classified by the image classification unit 114 using the learned classifier (S909). i.e., the learning of the identifier and the learning of the model occur simultaneously (the classifier for classifying the defect classes is learned using the captured images, the processed images, and the defect classes stored in the defect class storage unit)).
Regarding claim 4, Kondo et al. teaches the semiconductor image processing apparatus according to claim 2, wherein the model is configured to classify the input image for each region including the feature amount, and give the label that is different for each type of the feature amount (Fig 9 and 11, Para 93-97: the model classifies the input image for each region including a feature amount (defect) and can give a label that is different for each type of feature amount (multiple classes)).

Regarding claim 10, Kondo et al. teaches the semiconductor image processing apparatus according to claim 2, wherein the input image is an image of a surface of a wafer on which a semiconductor device is formed, and the feature amount includes a defect of the semiconductor device (Para 1: the invention relates to a defect classification apparatus classifying various types of defects generated in a manufacturing line of semiconductor wafers and, more specifically, to a defect classification apparatus and a defect classification method including a method and a unit for processing images captured by an image-capturing apparatus and learning a classifier by using the captured images and the processed images. i.e., a semiconductor image processing apparatus for determining a semiconductor defect).

Regarding claim 11, Kondo et al. teaches the semiconductor image processing apparatus according to claim 10, wherein the defect of the semiconductor device includes at least one defect of a circular shape, a linear shape, a wiring, or a hole (Fig 19A-19D show defects that can be considered linear shapes).

Regarding claim 19, claim 19 is rejected for the same reasons as claim 2 in the combination above.

Claims 7-8 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (US 20190139210 A1) in view of Lee et al. (US 20240273374 A1) and Harada et al. (US 20230052350 A1).

Regarding claim 7, Kondo et al. and Lee et al. do not teach the semiconductor image processing apparatus according to claim 2, wherein the processing circuitry is further configured to: extract a region including the feature amount included in the real image based on the additionally learned model, and perform image processing according to the extracted region to generate a feature emphasis-processed image.

In a similar field of endeavor, Harada et al. teaches this limitation (Fig 5, 504 (expanded in Fig 10), Para 70-72, and Fig 11, Para 113: in Step S1101, a high magnification image is estimated from the defect candidate site (region) through the image quality enhancement process. At this time, the high magnification image is estimated such that the center of the defect candidate region is the center of the estimated high magnification image. Since the image quality enhancement process is described above in Step S502, the details thereof will not be repeated. As the image quality enhancement process parameter used in Step S1101, the image quality enhancement process parameter adjusted in Step S502 is used. That is, the corresponding high magnification image can be automatically estimated based on the defect candidate site as an input using the image quality enhancement process parameter of the neural network that is learned in advance. As a result, the high magnification image that is more accurately estimated can be acquired, and the defect discrimination can be executed with higher accuracy. i.e., based on the additionally learned model (in Fig 5, the process parameter automatic adjustment using the inspection image is the additionally learned model, following parameter learning using generated pseudo-training image pairs), a feature amount (defect candidate site) is extracted and processed to become 'high magnification' (image processing is applied to the extracted region to generate a feature emphasis-processed image)).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to combine the teachings of Kondo et al. (US 20190139210 A1), Lee et al. (US 20240273374 A1), and Harada et al. (US 20230052350 A1) so that the processing circuitry is further configured to extract a region including the feature amount included in the real image based on the additionally learned model, and perform image processing according to the extracted region to generate a feature emphasis-processed image. Doing so would provide a defect inspecting system and method in which a process parameter relating to detection of a defect can be fully automatically adjusted (Harada et al., Para 17).

Regarding claim 8, Kondo et al. and Lee et al. do not teach the semiconductor image processing apparatus according to claim 2, wherein the processing circuitry is further configured to generate a background difference image obtained by removing a background pattern from the real image.

In a similar field of endeavor, Harada et al. teaches this limitation (Fig 12 and Para 110: images of the defect candidate sites 1202 to 1204 are cut, the image quality enhancement process is executed on the cut defect candidate sites 1202 to 1204 to estimate high magnification images, and the actual defect site is discriminated using the estimated high magnification images. In this case, in the image quality enhancement process, the corresponding high magnification image is estimated from the defect candidate site using the image quality enhancement process parameter adjusted in the image quality enhancement process parameter adjustment step S502. As a result, a defect can be discriminated with high accuracy using an image having a high SNR. i.e., a background difference image can be considered a segmented real image determining the difference between the (removed) background pattern and the remaining defect 'foreground', as can be seen in Fig 12, 1202 versus Fig 12, 1205a or 1205b).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to combine the teachings of Kondo et al. (US 20190139210 A1), Lee et al. (US 20240273374 A1), and Harada et al. (US 20230052350 A1) so that the processing circuitry is further configured to generate a background difference image obtained by removing a background pattern from the real image. Doing so would provide a defect inspecting system and method in which a process parameter relating to detection of a defect can be fully automatically adjusted (Harada et al., Para 17).

Regarding claim 12, Kondo et al. and Lee et al. do not teach the semiconductor image processing apparatus according to claim 10, wherein the processing circuitry is further configured to: specify a pattern of the defect; randomly select each of a length, a color, and a position of the defect; generate a defect pattern image; and generate the simulated image by combining a background pattern image and the generated defect pattern image.

In a similar field of endeavor, Harada et al. teaches this limitation (Fig 5, 503, Fig 9, and Para 94-96: generating a pseudo defect image includes specifying a pattern, length, color, and position of the defect, generating a defect pattern image, and generating a simulated image by combining a background pattern image and the generated defect pattern image).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to combine the teachings of Kondo et al. (US 20190139210 A1), Lee et al. (US 20240273374 A1), and Harada et al. (US 20230052350 A1) so that the processing circuitry is further configured to specify a pattern of the defect, randomly select each of a length, a color, and a position of the defect, generate a defect pattern image, and generate the simulated image by combining a background pattern image and the generated defect pattern image. Doing so would provide a defect inspecting system and method in which a process parameter relating to detection of a defect can be fully automatically adjusted (Harada et al., Para 17).

Claims 13-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kondo et al. (US 20190139210 A1) in view of Harada et al. (US 20230052350 A1).

Regarding claim 13, Kondo et al. does not teach the semiconductor image processing apparatus according to claim 1, wherein the processing circuitry is configured to: generate a background difference image obtained by removing a background pattern from the input image; extract the feature amount from a feature emphasis-processed image generated by performing image processing according to a region including a feature amount extracted by the additionally learned model and a region including a feature amount included in the background difference image; and perform clustering of the input image based on a feature amount included in the feature emphasis-processed image.
In a similar field of endeavor, Harada et al. teaches the processing circuitry configured to generate a background difference image obtained by removing a background pattern from the input image (Fig 11-12 and Para 110: images of the defect candidate sites 1202 to 1204 are cut, the image quality enhancement process is executed on the cut defect candidate sites 1202 to 1204 to estimate high magnification images, and the actual defect site is discriminated using the estimated high magnification images. In this case, in the image quality enhancement process, the corresponding high magnification image is estimated from the defect candidate site using the image quality enhancement process parameter adjusted in the image quality enhancement process parameter adjustment step S502. As a result, a defect can be discriminated with high accuracy using an image having a high SNR. i.e., a background difference image can be considered a segmented real image determining the difference between the (removed) background pattern and the remaining defect 'foreground', as can be seen in Fig 12, 1202 versus Fig 12, 1205a or 1205b, or the estimated high magnification image); extract the feature amount from a feature emphasis-processed image generated by performing image processing according to a region including a feature amount extracted by the additionally learned model and a region including a feature amount included in the background difference image (Fig 5, 504, Fig 11, and Fig 12. i.e., the feature amount (the defect candidate in both 503/504, or S1101-S1102 in Fig 11) is extracted (output candidate sites determined in S1104) by performing image processing according to a region including a feature amount extracted by the additionally learned model (Fig 5, 504, defect candidates) and a region including a feature amount in the background difference image (which can be considered the background of the reference image or inspection image in Fig 11, S1101 or S1102, where the high magnification image is estimated)); and perform clustering of the input image based on a feature amount included in the feature emphasis-processed image (Fig 5, 503/504, and Para 70-71: the process parameter adjusted in Step S503 is set as an initial value, the process parameter is adjusted using this initial value in Step S504, and thus the final process parameter is acquired. Next, the process parameter automatic adjustment process ends in Step S500_E. The final process parameter acquired in Step S504 is used in the defect observation process on another semiconductor wafer. i.e., the input image (the pseudo-generated data in this case, or the inspection image in the repeated process in Step S504) is clustered based on the feature amount in the feature emphasis-processed image; see Fig 11, S1104, where the defect candidate site having the highest degree of abnormality is output).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to combine the teachings of Kondo et al. (US 20190139210 A1) and Harada et al. (US 20230052350 A1) so that the processing circuitry is configured to generate a background difference image obtained by removing a background pattern from the input image, extract the feature amount from a feature emphasis-processed image, and perform clustering. Doing so would provide a defect inspecting system and method in which a process parameter relating to detection of a defect can be fully automatically adjusted (Harada et al., Para 17).

Regarding claim 14, Kondo et al. does not teach the semiconductor image processing apparatus according to claim 13, wherein the processing circuitry is further configured to: combine the feature amount included in the input image and the feature amount extracted from the feature emphasis-processed image, and perform the clustering of the input image based on the obtained feature amount.

In a similar field of endeavor, Harada et al. teaches this limitation (Fig 11, steps S1100-S1104. i.e., the defect candidate according to the inspection image (the feature amount included in the input image) and the defect candidate according to the reference image (the feature amount in the emphasis-processed, or pseudo, image) are combined in step S1103 and the S1100_RE repetition. This is then used to determine the defect candidate site having the highest degree of abnormality as the actual defect site (performing clustering of the input image based on the obtained feature amount)).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date to combine the teachings of Kondo et al. (US 20190139210 A1) and Harada et al. (US 20230052350 A1) so that the processing circuitry is further configured to combine the feature amount included in the input image and the feature amount extracted from the feature emphasis-processed image, and perform the clustering of the input image based on the obtained feature amount. Doing so would provide a defect inspecting system and method in which a process parameter relating to detection of a defect can be fully automatically adjusted (Harada et al., Para 17).

Regarding claim 20, claim 20 is rejected for the same reasons as claim 13 in the combination above.

Allowable Subject Matter

Claims 5-6, 9, and 15-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 5-6, 9, and 15-17 would also need to be rewritten to overcome the rejections under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action.
The following is a statement of reasons for the indication of allowable subject matter in claim 5: the primary reference Kondo et al. and the other cited prior art fail to teach or make obvious the semiconductor image processing apparatus according to claim 2, wherein the additional learning includes calculating a first loss function value based on a label corresponding to a first inference image inferred by inputting the simulated image to the model and the true answer label of the simulated image, calculating a second loss function value based on the label predicted by inputting a second inference image inferred by inputting the real image to the model to the identifier, and updating a parameter of the model based on a third loss function value obtained by adding the first loss function value and the second loss function value. No prior art alone or in combination anticipates the limitations of claim 5; claim 6 is objected to due to dependency.

The following is a statement of reasons for the indication of allowable subject matter in claim 9: the primary reference Kondo et al. and the other cited prior art fail to teach or make obvious the semiconductor image processing apparatus according to claim 8, wherein the processing circuitry is further configured to: calculate an average pixel value for each first pixel region in a first direction of the real image; generate a first simulated background image based on the average pixel value for each first pixel region in the first direction of the real image; generate a second simulated background image based on a pixel value obtained by subtracting, for each pixel, the average pixel value of the real image from a pixel value obtained by adding, for each pixel, the average pixel value for each first pixel region in the first direction of the real image and an average pixel value for each second pixel region in a second direction intersecting the first direction of the real image; and generate the background difference image by a difference between the real image and the first simulated background image or a difference between the real image and the second simulated background image. No prior art alone or in combination anticipates the limitations of claim 9; claim 15 contains corresponding subject matter, and claims 16-17 are objected to due to dependency.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

H. Kahng and S. B. Kim, "Self-Supervised Representation Learning for Wafer Bin Map Defect Pattern Classification," IEEE Transactions on Semiconductor Manufacturing, vol. 34, no. 1, pp. 74-86, Feb. 2021, doi: 10.1109/TSM.2020.3038165.
J. Yu and X. Lu, "Wafer Map Defect Detection and Recognition Using Joint Local and Nonlocal Linear Discriminant Analysis," IEEE Transactions on Semiconductor Manufacturing, vol. 29, pp. 33-43, 2016.
T. Nakazawa and D. V. Kulkarni, "Anomaly Detection and Segmentation for Wafer Defect Patterns Using Deep Convolutional Encoder–Decoder Neural Network Architectures in Semiconductor Manufacturing," IEEE Transactions on Semiconductor Manufacturing, vol. 32, no. 2, pp. 250-256, May 2019.
US 9311697 B2; US 20240078659 A1; US 20230267599 A1; US 20220343479 A1; US 20220044391 A1; US 20210150699 A1; US 20200334800 A1; US 11776108 B2; US 11062458 B2.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK PETER KRAYNAK, whose telephone number is (703) 756-1713.
The examiner can normally be reached Monday - Friday, 7:30 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vu Le, can be reached at (571) 272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JACK PETER KRAYNAK/
Examiner, Art Unit 2668

/UTPAL D SHAH/
Primary Examiner, Art Unit 2668
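As an editorial aid to the claim language: the claim 9 background-simulation arithmetic quoted in the statement of reasons above reads directly as array operations. Below is one plausible NumPy sketch of it, treating each image row as a "first pixel region" in the first direction and each column as a "second pixel region" in the intersecting direction; the function name, the shapes, and the tie-break between the two difference images are illustrative assumptions, not the application's actual implementation.

```python
import numpy as np

def background_difference(real: np.ndarray) -> np.ndarray:
    """One plausible reading of claim 9's background-difference construction.

    Assumptions: rows are the first-direction pixel regions, columns the
    second-direction regions; the claim's "or" between the two difference
    images is resolved here by keeping the lower-residual one.
    """
    row_avg = real.mean(axis=1, keepdims=True)        # average per first-direction region
    col_avg = real.mean(axis=0, keepdims=True)        # average per second-direction region

    first_bg = np.broadcast_to(row_avg, real.shape)   # first simulated background image
    second_bg = row_avg + col_avg - real.mean()       # row + column averages minus global average

    diff_first = real - first_bg
    diff_second = real - second_bg
    # Keep whichever simulated background explains more of the image.
    if np.abs(diff_first).sum() <= np.abs(diff_second).sum():
        return diff_first
    return diff_second

# Smoke test on a synthetic image: smooth gradient background plus a small "defect".
img = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
img[30:34, 40:44] += 0.5
print(background_difference(img).max())  # the defect stands out against a near-zero background
```

The row-plus-column model mirrors the claim's construction of adding the two directional averages and subtracting the whole-image average, which suppresses additive line patterns while leaving localized defects visible in the difference image.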

Prosecution Timeline

Feb 28, 2024 — Application Filed
Jan 26, 2026 — Non-Final Rejection, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602819
IMAGE PROCESSING APPARATUS, FEATURE MAP GENERATING APPARATUS, LEARNING MODEL GENERATION APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12592065
SYSTEMS AND METHODS FOR OBJECT DETECTION IN EXTREME LOW-LIGHT CONDITIONS
2y 5m to grant • Granted Mar 31, 2026
Patent 12586210
BIDIRECTIONAL OPTICAL FLOW ESTIMATION METHOD AND APPARATUS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579720
METHOD OF GENERATING TRAINED MODEL, MACHINE LEARNING SYSTEM, PROGRAM, AND MEDICAL IMAGE PROCESSING APPARATUS
2y 5m to grant • Granted Mar 17, 2026
Patent 12568314
IMAGE SIGNAL PROCESSOR, METHOD OF OPERATING THE IMAGE SIGNAL PROCESSOR, AND APPLICATION PROCESSOR INCLUDING THE IMAGE SIGNAL PROCESSOR
2y 5m to grant • Granted Mar 03, 2026
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 97% (+18.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 96 resolved cases by this examiner. Grant probability derived from career allow rate.
