Prosecution Insights
Last updated: April 19, 2026
Application No. 18/330,381

DEFECT DETECTING APPARATUS AND METHOD

Non-Final OA — §103, §112
Filed
Jun 07, 2023
Examiner
SHUI, MING
Art Unit
2663
Tech Center
2600 — Communications
Assignee
Foxconn Technology Group Co. Ltd.
OA Round
3 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% — grants 58% of resolved cases (186 granted / 321 resolved; -4.1% vs TC avg)
Interview Lift: +50.1% among resolved cases with interview (strong)
Typical Timeline: 3y 7m avg prosecution; 23 currently pending
Career History: 344 total applications across all art units

Statute-Specific Performance

§101: 30.8% (-9.2% vs TC avg)
§103: 30.5% (-9.5% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 16.9% (-23.1% vs TC avg)
Figures compared against Tech Center average estimates • Based on career data from 321 resolved cases

Office Action

§103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/22/26 has been entered.

Response to Arguments

Applicant argues that the claim language of "training the training model based on an encoder loss function, a contextual loss function, and an adversarial loss function after normalization" is not taught by the cited art, contending that the language refers to "individually normalizing the three loss functions (e.g., restricting them to [0,1]) to balance the training weights." While the examiner appreciates applicant's explanation and interpretation, the cited claim language could encompass such an interpretation but is much broader and supports other interpretations. For example, the cited language can also be understood to mean that the training model is trained based on the three loss functions after normalization of the data.
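Applicant's interpretation — individually normalizing each loss to [0,1] before combining, so the training weights stay balanced — can be sketched minimally in NumPy. The function names and the min-max scaling scheme are illustrative assumptions; the application does not specify how normalization is performed.

```python
import numpy as np

def minmax_normalize(value, low, high):
    """Rescale a raw loss value into [0, 1] given running bounds.
    Illustrative assumption: the claim does not name a scheme."""
    return float(np.clip((value - low) / max(high - low, 1e-12), 0.0, 1.0))

def combined_training_loss(l_enc, l_con, l_adv, bounds):
    """Applicant's reading: the encoder, contextual, and adversarial
    losses are each normalized individually, then combined, so no
    single term dominates training."""
    losses = (l_enc, l_con, l_adv)
    return sum(minmax_normalize(l, lo, hi) for l, (lo, hi) in zip(losses, bounds))
```

Under this reading, a contextual loss that is numerically much larger than the adversarial loss would still contribute at most 1.0 to the combined objective.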
Additionally, as written it is not clear whether the phrase "after normalization" modifies training the training model, all three loss functions, or only the adversarial loss function. Based upon this argument, the examiner will be entering a 112(b) indefiniteness rejection.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-2, 4-12, and 14-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. In particular, the phrase "training the training model based on an encoder loss function, a contextual loss function, and an adversarial loss function after normalization" is unclear. As written, it is unclear whether the phrase "after normalization" modifies training the training model, all three loss functions, or only the adversarial loss function. From applicant's arguments, the phrase shall be interpreted for examination as modifying all three loss functions, after which the training model is trained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over CN109584221, Luo (an English translation from Espacenet is provided; unfortunately, the best pinpoint cite is to a page), in view of Difference Detection Between Two Images for Image Monitoring, Feng (hereafter Feng), and Normalized Loss Functions for Deep Learning with Noisy Labels, Ma et al. (hereafter Ma).
1. A defect detecting apparatus, comprising: a storage, being configured to store a defect detecting model; a transceiver interface; and a processor, being electrically connected to the storage and the transceiver interface, and being configured to perform operations comprising: (Luo discloses a computer-based method; to perform computer processes, a storage storing the information, as well as a processor and communications interfaces between the computer components, is inherent to a computer)

receiving an image to be tested from the transceiver interface; (Luo page 2, summary step 1)

detecting the image to be tested received from the transceiver interface through the defect detecting model stored in the storage to generate an anomaly score corresponding to the image to be tested, wherein the defect detecting model is generated based on a training of a generative adversarial network and a plurality of normalized loss functions; (Luo page 2, summary step 2)

comparing the anomaly score with an anomaly score threshold to determine whether the image to be tested is a defective image; (Luo page 2, summary step 3)

wherein the processor is further configured to perform the following operations: receiving a plurality of sample images; (Luo page 2/3, step 101) inputting the sample images to a training model constructed by the generative adversarial network; (Luo page 3, steps 102-103) setting the training model after training as the defect detecting model. (Luo page 2, step 2 uses the Ganomaly model)

Luo does not disclose wherein the anomaly score is generated by calculating a pixel squared difference between the image to be tested and a reconstructed image corresponding to the image to be tested. Feng page 3 teaches that using a mean squared difference between two images is a common way to determine how different images are.
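Feng's mean squared difference, mapped here onto claim 1's anomaly score and threshold comparison, can be sketched as follows. This is a minimal NumPy illustration; the function names and the scalar threshold are assumptions, not from the application.

```python
import numpy as np

def anomaly_score(image, reconstruction):
    """Mean per-pixel squared difference between the image to be
    tested and its GAN reconstruction (Feng's difference measure)."""
    diff = image.astype(np.float64) - reconstruction.astype(np.float64)
    return float(np.mean(diff ** 2))

def is_defective(image, reconstruction, threshold):
    """Claim 1's final step: compare the anomaly score against a
    threshold to decide whether the image is a defective image."""
    return anomaly_score(image, reconstruction) > threshold
```

A defect-free image that the generator reconstructs well yields a score near zero, while an anomalous region the generator cannot reproduce inflates the squared differences and pushes the score past the threshold.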
Therefore, it would have been obvious to modify the system of Luo to utilize a mean squared difference for the purpose of determining the difference between two images, as the use of a known technique to yield predictable results.

training the training model based on an encoder loss function, a contextual loss function, and an adversarial loss function after normalization; (Luo Ganomaly — see Akcay pages 628-629 disclosing the three loss functions)

Luo and Feng do not explicitly disclose normalizing an encoder loss function, a contextual loss function, and an adversarial loss function as argued in applicant's interpretation, which for purposes of this Office action only is adopted. However, Ma page 1 discloses the use of normalizing loss functions for robustness to noisy labels. See also Ma page 9, conclusion, where they state that a simple normalization can make any loss function robust to noisy labels; thus it would have been obvious to modify the system of Luo and Feng to utilize normalization of the loss functions for robustness as taught by Ma.

2. The defect detecting apparatus of claim 1, wherein the reconstructed image corresponding to the image to be tested is generated by a first encoder and a decoder in the defect detecting model. (Luo page 2, summary item 2, Ganomaly model — see attached Akcay evidencing the Ganomaly model, page 624, encoder-decoder-encoder)

4. The defect detecting apparatus of claim 1, wherein the encoder loss function after normalization is generated based on a normalized squared difference between a first encoding feature and a second encoding feature, (Luo page 8, loss function is based on the previously normalized Ganomaly model modified by Feng) and the first encoding feature is generated by a first encoder in the defect detecting model, and the second encoding feature is generated by a second encoder in the defect detecting model. (Luo; see also pages 9-11 generally)
5. The defect detecting apparatus of claim 1, wherein the reconstructed image corresponding to the image to be tested is generated by a first encoder and a decoder in the defect detecting model. (Luo page 7, preprocessing steps 103-104, for reconstructed images, which would be a decoding of a first image that is then encoded into the reconstructed image)

Luo and Feng do not disclose wherein the contextual loss function after normalization is generated based on an absolute value of a normalized pixel difference between the image to be tested and the reconstructed image corresponding to the image to be tested. Ma page 1 discloses the use of the well-known mean absolute error loss function, as such functions are robust to noisy labels. Thus it would have been obvious to modify Luo/Feng to utilize a well-known loss function based on the absolute value of differences for the purpose of being robust to noisy labels. Claim 15 is rejected under a similar rationale.

6. The defect detecting apparatus of claim 1, wherein the adversarial loss function after normalization is generated based on a normalized feature matching squared difference. (Luo page 8, loss function is based on the previously normalized Ganomaly model modified by Feng)

7. The defect detecting apparatus of claim 1, wherein the image to be tested corresponds to a first color space, and the processor further performs the following operations: converting the image to be tested to a second color space, wherein the second color space comprises at least one first channel value and a plurality of second channel values; and (Luo page 7, RGB channels, individual color values) performing a normalization operation on the at least one first channel value of the image to be tested in the second color space. (Luo page 7, normalize RGB pixel values; see also pages 9-11 generally)
8. The defect detecting apparatus of claim 7, wherein a first range corresponding to the at least one first channel value is different from a second range corresponding to the second channel values. (It is noted that a range for RGB values in this context is understood as an RRGGBB value range, where for a single color it would be the CC range, which is different)

9. The defect detecting apparatus of claim 1, wherein the processor further performs the following operations: receiving a plurality of sample images, wherein the sample images correspond to a first color space; (Luo page 7, sample images) converting the sample images to a second color space, wherein the second color space comprises at least one first channel value and a plurality of second channel values; (Luo page 7, RGB channels, individual color values) performing a normalization operation on the at least one first channel value of the sample images in the second color space; (Luo page 7, normalize RGB pixel values) inputting the sample images to a training model constructed by the generative adversarial network; training the training model based on an encoder loss function, a contextual loss function, and an adversarial loss function after normalization; and (Luo Ganomaly — see Akcay pages 628-629 disclosing the three loss functions) setting the training model after training as the defect detecting model. (Luo page 2, step 2 uses the Ganomaly model; see also pages 9-11 generally)

10. The defect detecting apparatus of claim 9, wherein a first range corresponding to the at least one first channel value is different from a second range corresponding to the second channel values. (It is noted that a range for RGB values in this context is understood as an RRGGBB value range, where for a single color it would be the CC range, which is different)

Claims 11-12 and 14-20 are rejected under a similar rationale.
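The color-space limitation of claims 7-10 — converting to a second color space with one "first channel" and several "second channels", then normalizing only the first — could look like the sketch below, assuming a full-range BT.601 RGB-to-YCbCr conversion. The application does not name the second color space; the choice of YCbCr and the function names are assumptions for illustration.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr. Y plays the role of the single
    'first channel'; Cb/Cr are the 'second channel values', which sit
    in a different range (offset around 128)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def normalize_first_channel(ycbcr):
    """Normalize only the Y channel to [0, 1], leaving Cb/Cr in their
    original range, per the claimed selective normalization."""
    out = ycbcr.astype(np.float64).copy()
    out[..., 0] = out[..., 0] / 255.0
    return out
```

After this step the first channel spans [0, 1] while the second channels keep their 0-255 range, matching the claims' requirement that the two ranges differ.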
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Shui, whose telephone number is (303) 297-4247. The examiner can normally be reached 7-5 Pacific Time, M-Th.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Greg Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Ming Shui/
Primary Examiner, Art Unit 2663

Prosecution Timeline

Jun 07, 2023
Application Filed
Jun 10, 2025
Non-Final Rejection — §103, §112
Aug 19, 2025
Interview Requested
Aug 25, 2025
Examiner Interview Summary
Aug 25, 2025
Applicant Interview (Telephonic)
Sep 11, 2025
Response Filed
Oct 20, 2025
Final Rejection — §103, §112
Jan 22, 2026
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Mar 12, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602781
AI-BASED CELL CLASSIFICATION METHOD AND SYSTEM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602899
AUTHENTICATION AND IDENTIFICATION OF PHYSICAL OBJECTS USING IMMUTABLE PHYSICAL CODE
2y 5m to grant • Granted Apr 14, 2026
Patent 12586234
DETECTION DEVICE DETECTING GAZE POINT OF USER, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR
2y 5m to grant • Granted Mar 24, 2026
Patent 12575756
MAGNETIC RESONANCE IMAGING APPARATUS, PHASE CORRECTING METHOD, AND IMAGING CONTROLLING METHOD
2y 5m to grant • Granted Mar 17, 2026
Patent 12573167
METHOD FOR GENERATING AND RECOGNIZING DEFORMABLE OF FIDUCIAL MARKERS BASED ON ARTIFICIAL INTELLIGENCE IN END-TO-END MANNER AND SYSTEM THEREOF
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 99% (+50.1%)
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 321 resolved cases by this examiner. Grant probability derived from career allow rate.
