Prosecution Insights
Last updated: April 19, 2026
Application No. 18/203,811

METHOD FOR INSPECTING DEFECTS OF PRODUCT BY USING 2D IMAGE INFORMATION

Final Rejection §103
Filed: May 31, 2023
Examiner: JAMES, DOMINIQUE NICOLE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: VAZIL Company Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (16 granted / 21 resolved; +14.2% vs Tech Center average)
Interview Lift: +38.5% among resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 27 applications currently pending
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 21 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

This action is in response to the application filed on December 03, 2025. Claims 1, 11, and 12 are amended. Thus, claims 1-12 are pending for examination in this application.

Priority

Receipt is acknowledged that the application claims priority to foreign applications KR10-2023-0052216 (priority date April 20, 2023), KR10-2023-0037941 (priority date March 23, 2023), KR10-2023-0037942 (priority date March 23, 2023), and KR10-2022-0152776 (priority date November 15, 2022). Copies of certified papers required by 37 CFR 1.55 have been received.

Response to Amendments

Applicant's remarks and amendments filed December 05, 2025, have been entered.

Response to Arguments

Applicant's arguments filed December 05, 2025, regarding the rejection of claims 1-12 have been fully considered but are moot because the arguments do not apply to the new combination of references, made possible by Applicant's newly submitted amendments, including the new prior art (Helwegen et al., US 20220405576, and Kim et al., US 20240144653) used in the current rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 7-9, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al., US 20210374940, in view of Adrian et al., US 20220180189, in view of Helwegen et al., US 20220405576, and in further view of Kim et al., US 20240144653.

Regarding claim 1, Liu teaches a method for (see Liu, Paragraph [0059], "detecting a product defect and a defect type existing in the product by using the segmentation networks, the concatenating network and the classification network"), the method performed by a computing device, the method comprising: obtaining one or more images (see Liu, Paragraph [0052], "an image of flawed products obtained in the early stage of the production line are acquired"); inputting the obtained image into a neural network model, and generating a (see Liu, Paragraph [0066], "the small-scale feature map generated by the convolution part to the original image size"); and identifying the presence or absence of the defect (see Liu, Paragraph [0059], "detecting a product defect and a defect type existing in the product").

Liu does not expressly teach inputting the obtained image into a neural network model, and generating a plurality of feature maps; extracting a first feature vector based on the plurality of feature maps; extracting a second feature vector for identifying a feature map related to the presence or absence of the defect among the plurality of feature maps based on the extracted first feature vector; extracting a third feature vector for identifying a feature map region related to the presence or absence of the defect based on the plurality of feature maps; and predicting the presence or absence of the defect based on the first feature vector, the second feature vector, and the third feature vector by using the neural network model.
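For orientation only, the claim-1 arrangement recited above can be sketched as a toy pipeline. This is a hypothetical illustration, not the applicant's or any cited reference's implementation: the array shapes, the softmax channel weighting, the per-channel max as the "region" summary, and the linear prediction head are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "a plurality of feature maps" from a backbone CNN: (C, H, W).
feature_maps = rng.normal(size=(8, 16, 16))

# First feature vector: pool each feature map down to one value per channel.
first_vec = feature_maps.mean(axis=(1, 2))              # shape (8,)

# Second feature vector: re-weight channels by an importance score derived
# from the first vector (softmax weighting is an assumption).
weights = np.exp(first_vec) / np.exp(first_vec).sum()
second_vec = weights * first_vec                        # shape (8,)

# Third feature vector: a spatial/region summary of the feature maps
# (per-channel max response is an assumption).
third_vec = feature_maps.max(axis=(1, 2))               # shape (8,)

# Claim 9's ordering: concatenate second + third into a first synthesized
# vector, then concatenate with the first vector, and feed the result to a
# prediction head (here, an assumed random linear layer plus sigmoid).
first_synth = np.concatenate([second_vec, third_vec])   # shape (16,)
final_synth = np.concatenate([first_synth, first_vec])  # shape (24,)
logit = final_synth @ rng.normal(size=final_synth.size)
p_defect = 1.0 / (1.0 + np.exp(-logit))                 # defect probability
```

The point of the sketch is only the data flow: three vectors derived from one stack of feature maps, combined by concatenation before a single prediction.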
However, Adrian, in a similar invention in the same field of endeavor, teaches inputting the obtained image into a neural network model, and generating a plurality of feature maps (see Adrian, Paragraph [0018], "The production of a multiplicity of feature maps by the neural network for the image"); extracting a first feature vector based on the plurality of feature maps (see Adrian, Paragraph [0018], "supplying the first subset of feature maps of all images assigned to the respective time to the first encoder for the production of the first feature vector for the images assigned to the time"); extracting a second feature vector for identifying a feature map related to the presence or absence of the defect among the plurality of feature maps (see Adrian, Paragraph [0018], "supplying the second subset of feature maps of all images assigned to the respective time to the second encoder for the production of the second feature vector for the images assigned to the time"); extracting a third feature vector for identifying a feature map region related to the presence or absence of the defect based on the plurality of feature maps (see Adrian, Paragraph [0027], "supplying the third subset of the image to the third encoder for the production of a third feature vector to which the respective perspective of the image sequence assigned to the image is assigned"); and predicting the presence or absence of the defect based on the first feature vector, the second feature vector, and the third feature vector by using the neural network model (see Adrian, Paragraph [0031], "supplying the first feature vector, the second feature vector, and the third feature vector to the decoder for the production of a predicted target image from the perspective supplied to the third feature vector"; a predicted target is considered to be predicting the presence or absence of the defect).
Liu and Adrian are analogous art because both are in the same field of endeavor of detecting defects/anomalies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to generate a plurality of feature maps; extract first, second, and third feature vectors based on the plurality of feature maps; predict the presence or absence of the defect based on the first, second, and third feature vectors; extract features of the image using a plurality of first convolution layers and a plurality of first pooling layers; perform additional pooling; input the feature maps into one or more second convolution layers; generate a first synthesized vector by concatenating the second feature vector and the third feature vector; and generate a final synthesized vector by concatenating the first feature vector and the first synthesized vector, as taught in the method of Adrian, in the method of Liu, for the reduction of the error value (see Adrian, Paragraph [0027]).

Liu in view of Adrian does not expressly teach performing a global average pooling operation to remove spatial information from the first feature vector.

However, Helwegen, in a similar invention in the same field of endeavor, teaches performing a global average pooling operation to remove spatial information from the first feature vector (see Helwegen, Paragraph [0227], "The connections are globally pooled, i.e. spatial information is removed, through the use of global average pooling."). Liu, Adrian, and Helwegen are analogous art because all are in the same field of endeavor of detecting abnormalities/objects.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use global average pooling to remove spatial information, as taught in the method of Helwegen, in the method of Liu in view of Adrian, to allow channel-wise information to be obtained (see Helwegen, Paragraph [0234]).

Liu in view of Adrian in view of Helwegen does not expressly teach extracting a second feature vector for identifying a feature map related to the presence or absence of the defect among the plurality of feature maps based on at least an importance level of one or more channels of the extracted first feature vector.

However, Kim, in a similar invention in the same field of endeavor, teaches extracting a second feature vector for identifying a feature map related to the presence or absence of the defect among the plurality of feature maps based on at least an importance level of one or more channels of the extracted first feature vector (see Kim, Paragraph [0164], "output the second feature vector based on the average of the elements of the first feature vector and may output the second feature map 250 based on the average of the channels of the first feature map 240"; the average of the elements of the first feature vector and the average of the channels of the first feature map are considered to be an importance level of one or more channels of the extracted first feature vector). Liu, Adrian, Helwegen, and Kim are analogous art because all are in the same field of endeavor of utilizing feature maps.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to output the second feature vector based on the average of the elements of the first feature vector and the average of the channels of the first feature map, as taught in the method of Kim, in the method of Liu in view of Adrian in view of Helwegen, to perform a predetermined training task with high accuracy (see Kim, Paragraph [0006]).

Regarding claim 2, Liu in view of Adrian in view of Helwegen in further view of Kim further teaches the method of claim 1, wherein the one or more images include one or more 2D images, and the obtaining of the one or more images includes obtaining one or more images of a front image, a back image, a right image, a left image, a top image, or a bottom image of the product (see Adrian, Paragraph [0121], "The first image sequence I.sup.(1) can show a scene from a first perspective (e.g. from above), the second image sequence I.sup.(2) can show the same scene from a second perspective (e.g. from a second angle of view, e.g. from the front), and the third image sequence I.sup.(3) can show the same scene from a third perspective (e.g. an oblique top view)."). The rationale of claim 1 has been applied herein.
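Kim's channel-averaging step, as quoted above, resembles the squeeze half of a squeeze-and-excitation block. The sketch below is a hypothetical illustration of that idea: the shapes, the fully-connected "excitation" weights, and the sigmoid scoring are assumptions, not taken from Kim or the application.

```python
import numpy as np

rng = np.random.default_rng(1)
first_map = rng.normal(size=(4, 8, 8))    # assumed first feature map (C, H, W)
first_vec = first_map.mean(axis=(1, 2))   # average of each channel ("squeeze")

# Assumed "excitation": a small fully-connected layer plus sigmoid turns the
# per-channel averages into importance scores in (0, 1).
w = rng.normal(size=(4, 4))
scores = 1.0 / (1.0 + np.exp(-(w @ first_vec)))

# Second feature vector: channels weighted by their importance scores,
# flagging the feature maps most related to the presence/absence of a defect.
second_vec = scores * first_vec
most_relevant_channel = int(scores.argmax())
```

Under this reading, "importance level of one or more channels" is simply a learned scalar per channel derived from the channel averages.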
Regarding claim 4, Liu in view of Adrian in view of Helwegen in further view of Kim further teaches the method of claim 1, wherein the inputting of the obtained image into the neural network model, and the generating of the plurality of feature maps, includes: extracting features of the image by using a plurality of first convolution layers and a plurality of first pooling layers included in the neural network model, and generating the plurality of feature maps based on the extracted features (see Adrian, Paragraph [0021], "The supplying of the first subset of feature maps of all images assigned to the respective time to the first encoder for the production of the first feature vector for the images assigned to the time can include: production, through application of a pooling method (e.g. max pooling, e.g. mean value pooling) to the first subset of feature maps, of a first set of pooling feature maps, each pooling feature map of the first set of pooling feature maps being assigned to a feature map of each first subset of feature maps"). The rationale of claim 1 has been applied herein.

Regarding claim 5, Liu in view of Adrian in view of Helwegen in further view of Kim further teaches the method of claim 1, wherein the extracting of the first feature vector based on the plurality of feature maps includes: extracting the first feature vector by performing additional pooling on the plurality of feature maps (see Adrian, Paragraph [0021], "The supplying of the first subset of feature maps of all images assigned to the respective time to the first encoder for the production of the first feature vector for the images assigned to the time can include: production, through application of a pooling method (e.g. max pooling, e.g. mean value pooling) to the first subset of feature maps, of a first set of pooling feature maps"; two pooling methods are listed for extracting the first feature vector, and therefore additional pooling can be performed).
The rationale of claim 1 has been applied herein.

Regarding claim 7, Liu in view of Adrian in view of Helwegen in further view of Kim further teaches the method of claim 1, wherein the extracting of the third feature vector for identifying the feature map region related to the presence or absence of the defect based on the plurality of feature maps includes: inputting the plurality of feature maps into one or more second convolution layers and applying a convolution operation, performing additional pooling on the feature maps to which the convolution operation is applied, and extracting the third feature vector for identifying the feature map region related to the presence or absence of the defect (see Adrian, Paragraph [0027], "supplying the third subset of the image to the third encoder for the production of a third feature vector to which the respective perspective of the image sequence assigned to the image is assigned," and Paragraph [0032], "for each image sequence of the multiplicity of image sequences, production of at least one third feature vector using a respective image of the image sequence. The features described in this paragraph, in combination with one or more of the eighth example through the seventeenth example, form an eighteenth example embodiment of the present invention"; the third feature vector is extracted in the same manner as the first and second feature vectors). The rationale of claim 1 has been applied herein.

Regarding claim 8, Liu in view of Adrian in view of Helwegen in further view of Kim further teaches the method of any one of claim 5 to claim 7, wherein the additional pooling includes global average pooling (see Liu, Paragraph [0072], "and a pooling layer connected to a final residual unit is a global average pooling layer;"). The rationale of claims 5 to 7 has been applied herein.
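Global average pooling, as recited in claim 8 and cited from Liu and Helwegen, collapses each feature map's spatial grid to a single scalar. A minimal numpy sketch (illustrative only; the shapes are assumptions):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse (C, H, W) feature maps to a (C,) vector of per-channel
    averages; all spatial information is discarded."""
    return feature_maps.mean(axis=(1, 2))

fmaps = np.arange(24, dtype=float).reshape(2, 3, 4)   # two 3x4 channels
pooled = global_average_pool(fmaps)                   # -> array([ 5.5, 17.5])
```

This is why the cited Helwegen passage equates global pooling with removing spatial information: only a channel-wise summary survives.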
Regarding claim 9, Liu in view of Adrian in view of Helwegen in further view of Kim further teaches the method of claim 1, wherein the predicting of the presence or absence of the defect based on the first feature vector, the second feature vector, and the third feature vector by using the neural network model includes: generating a first synthesized vector by concatenating the second feature vector and the third feature vector, generating a final synthesized vector by concatenating the first synthesized vector and the first feature vector, and predicting the presence or absence of the defect based on the final synthesized vector (see Adrian, Paragraph [0106], "decoder 120 can be set up to process a concatenation of first feature vector 116, second feature vector 118, and third feature vector 122, and, for this concatenation, to produce the predicted target image 106."). The rationale of claim 1 has been applied herein.

Regarding claim 11: claim 11 recites a computer program stored in a non-transitory computer-readable storage medium, wherein the computer program allows one or more processors to perform operations for predicting the presence or absence of a defect of a product when the computer program is executed by one or more processors, the operations comprising the same limitations as claim 1. Therefore, the rejection and rationale are analogous to those made for claim 1. Liu further teaches a non-transitory computer-readable storage medium, wherein the computer program allows one or more processors to perform operations for predicting the presence or absence of a defect of a product when the computer program is executed by one or more processors (see Liu, Paragraph [0039], "The processor 2100 may be a mobile version processor.
The memory 2200 includes, for example, ROM (Read Only Memory), RAM (Random Access Memory), nonvolatile memory such as a hard disk, etc.").

Regarding claim 12: claim 12 recites a computing device comprising at least one processor and a memory, wherein the at least one processor is configured to perform the same limitations as claimed in claim 1. Therefore, the rejection and rationale are analogous to those made for claim 1. Liu further teaches at least one processor and a memory, wherein the at least one processor is configured to perform the method (see Liu, Paragraph [0040], "In the present embodiment, the memory 2200 of the product defect detection device 2000 is configured to store instructions for controlling the processor 2100 to operate to at least execute the product defect detection method").

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Liu et al., US 20210374940, in view of Adrian et al., US 20220180189, in view of Helwegen et al., US 20220405576, in view of Kim et al., US 20240144653, and in further view of Wei et al., CN 113808104.

Regarding claim 3, Liu in view of Adrian in view of Helwegen in further view of Kim does not expressly teach the method of claim 1, wherein the inputting of the obtained image into the neural network model, and the generating of the plurality of feature maps, includes: performing gamma correction for the obtained image, and inputting the gamma-corrected image into the neural network model.

However, Wei, in a similar invention in the same field of endeavor, teaches wherein the inputting of the obtained image into the neural network model, and the generating of the plurality of feature maps, includes: performing gamma correction for the obtained image, and inputting the gamma-corrected image into the neural network model (see Wei, Paragraph [0019], "Gamma correction is used to process and correct the metal surface defect sample images").
Liu, Adrian, Helwegen, Kim, and Wei are analogous art because all are in the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to perform gamma correction for the obtained image, as taught in the method of Wei, in the method of Liu in view of Adrian in view of Helwegen in further view of Kim, to improve the efficiency of detecting and locating defects of the metal surface with a high recall rate (see Wei, Abstract).

Claims 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al., US 20210374940, in view of Adrian et al., US 20220180189, in view of Helwegen et al., US 20220405576, in view of Kim et al., US 20240144653, and in further view of Chen et al., WO 2022218068.

Regarding claim 6, Liu in view of Adrian in view of Helwegen in further view of Kim does not expressly teach the method of claim 1. However, Chen, in a similar invention in the same field of endeavor, teaches wherein the extracting of the second feature vector for identifying the feature map related to the presence or absence of the defect among the plurality of feature maps based on the extracted first feature vector includes: inputting the first feature vector into one or more first fully-connected layers, and extracting the second feature vector for identifying a feature map related to the presence or absence of the defect among the plurality of feature maps (see Chen, Paragraph [0116], "input the above-mentioned first vector into the target fully-connected network in the pre-trained above-mentioned fully-connected network set to obtain a second vector"). Liu, Adrian, Helwegen, Kim, and Chen are analogous art because all are in the same field of endeavor of image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to input the first feature vector into one or more first fully-connected layers and extract the second feature vector, as taught in the method of Chen, in the method of Liu in view of Adrian in view of Helwegen in further view of Kim, to determine whether to deliver the target material again (see Chen, Abstract).

Regarding claim 10, Liu in view of Adrian in view of Helwegen in further view of Kim does not expressly teach the method of claim 9. However, Chen, in a similar invention in the same field of endeavor, teaches wherein the predicting of the presence or absence of the defect based on the final synthesized vector includes: inputting the final synthesized vector into one or more second fully-connected layers and predicting the presence or absence of the defect (see Chen, Paragraph [0111], "the output result of the long short-term memory network 510 is input into the second fully connected layer 515 to obtain the output result of the second fully connected layer 515."). Liu, Adrian, Helwegen, Kim, and Chen are analogous art because all are in the same field of endeavor of image processing.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to input the final synthesized vector into one or more second fully-connected layers and predict the presence or absence of the defect, as taught in the method of Chen, in the method of Liu in view of Adrian in view of Helwegen in further view of Kim, to determine whether to deliver the target material again (see Chen, Abstract).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES, whose telephone number is (703) 756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DOMINIQUE JAMES/
Examiner, Art Unit 2666

/MING Y HON/
Primary Examiner, Art Unit 2666

Prosecution Timeline

May 31, 2023: Application Filed
Sep 03, 2025: Non-Final Rejection (§103)
Dec 05, 2025: Response Filed
Mar 06, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591976: CELL SEGMENTATION IMAGE PROCESSING METHODS. Granted Mar 31, 2026; 2y 5m to grant.
Patent 12567138: REGISTRATION METROLOGY TOOL USING DARKFIELD AND PHASE CONTRAST IMAGING. Granted Mar 03, 2026; 2y 5m to grant.
Patent 12548159: SCENE PERCEPTION SYSTEMS AND METHODS. Granted Feb 10, 2026; 2y 5m to grant.
Patent 12462681: Detection of Malfunctions of the Switching State Detection of Light Signal Systems. Granted Nov 04, 2025; 2y 5m to grant.
Patent 12462346: MACHINE LEARNING BASED NOISE REDUCTION CIRCUIT. Granted Nov 04, 2025; 2y 5m to grant.
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 99% (+38.5%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
