Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 4-12, 15-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Regarding claims 1, 12, and 21, which have been amended to incorporate previously rejected claims 2-3 and 13-14 in challenge of the Official Notice previously taken, new prior art is introduced in the rejections below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-12, and 15-23 are rejected under 35 U.S.C. 103 as being unpatentable over Ida et al. (US2022/0067514) in view of Wang et al. (CN112967264) and Yamashita et al. (US2023/0281797).
To claim 1, Ida teaches an apparatus, comprising:
a processor configured to execute a plurality of instructions; and a memory storing the plurality of instructions (inherent), wherein execution of the plurality of instructions configures the processor to:
output a defect prediction score of the input image (Fig. 1, inference result; paragraphs 0037, 0044-0045, 0068, 0070-0074, outputs an inference result relating to the input signal and corresponding to the statistic) through the use of a neural network (paragraphs 0034-0035) provided reference image, the input image (paragraphs 0049, 0126, input signal/captured image, wherein normal product image may obviously be used) and an enhanced image (paragraph 0038, 0083, emphasized and superimposed intermediate partial image),
wherein the neural network comprises an attention map modulator configured to adaptively adjust an intensity of an attention map (paragraph 0049, considering intensity fluctuations in process; paragraphs 0079-0080, luminance values change; paragraphs 0115-0121, light intensity at respective time is adapted in generating intermediate partial signal that is a basis for the interest map; paragraphs 0076, 0085, pixel value of the interest map is adaptively adjusted) generated during the use of the neural network (S1801-S1802 of Fig. 18; paragraphs 0038, 0075-0076, 0081-0083, 0088-0090, intermediate partial image extracted and generated as the interest map in association with the input image, the intermediate partial image can serve as the source of the mean value or the maximum value, which can be used as an interest map relating to defects, presence/absence of defects can be directly recognized on the basis of the pixel value in the intermediate partial image serving as the interest map, the grounds for identification are clear even when a neural network is used).
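For illustration only, the adaptive adjustment of attention map intensity characterized above may be sketched as follows; the NumPy representation, the function name, and the rescaling rule are assumptions made solely for illustration and are not drawn from Ida:

```python
import numpy as np

def modulate_attention(attention_map: np.ndarray, target_peak: float = 1.0) -> np.ndarray:
    """Adaptively rescale an attention map so that its strongest response
    reaches a chosen peak intensity (illustrative sketch only)."""
    peak = attention_map.max()
    if peak <= 0:
        # Nothing to emphasize; return the map unchanged.
        return attention_map
    gain = target_peak / peak          # gain adapts to the content of the map
    return np.clip(attention_map * gain, 0.0, target_peak)

# Example: a 4x4 attention map with weak responses is rescaled to full intensity.
weak_map = np.array([[0.0, 0.1, 0.0, 0.0],
                     [0.0, 0.2, 0.1, 0.0],
                     [0.0, 0.1, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 0.0]])
print(modulate_attention(weak_map).max())  # 1.0
```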
However, Ida does not expressly disclose generating an enhanced image based on an input image and a neural network provided reference image, in which a defective area is emphasized, the generating comprising: obtaining a differential image based on the input image and the reference image; and adjusting an intensity of a defective area included in the differential image to generate the enhanced image.
In furtherance of said obviousness, Wang teaches a defect detection apparatus (Figs. 1-6) using a trained neural network (paragraphs 0017, 0031, 0222) to determine defects using a defect-free sample product image, a captured product image, and a region/area image (Fig. 3; paragraphs 0068, 0109, 0127, 0147, 0149-0150, the product image and the matching template image can be input into the feature extraction network respectively, and the first feature map and the second feature map can be output; the first feature map and the second feature map can be input into the correlation attention network, the correlation attention analysis can be performed on the first feature map and the second feature map to obtain the correlation attention map, wherein correlation attention values are adaptively adjusted).
Yamashita teaches a defect discrimination apparatus (abstract, Figs. 4, 8, 16) generating an enhanced image based on an input image and a neural network provided reference image, in which a defective area is emphasized, the generating comprising: obtaining a differential image based on the input image and the reference image (S140 of Fig. 4, paragraph 0076; S435 of Fig. 8, paragraph 0091; S610 of Fig. 16, paragraph 0125; paragraph 0037, generate a difference image by extracting the difference between the reference image and the inspection image; paragraph 0146, deep learning using neural network); and adjusting an intensity of a defective area included in the differential image to generate the enhanced image (S630 of Fig. 16, paragraph 0130, corresponding areas of the difference image are emphasized by increasing the brightness of the difference image).
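For illustration only, the difference-and-emphasis operation attributed to Yamashita above may be sketched as follows; the function name, the threshold, and the emphasis factor are assumptions for illustration and do not purport to reproduce Yamashita's implementation:

```python
import numpy as np

def generate_enhanced_image(input_image: np.ndarray,
                            reference_image: np.ndarray,
                            threshold: float = 0.1,
                            emphasis: float = 2.0) -> np.ndarray:
    """Obtain a differential image and emphasize the defective area by
    increasing its brightness (illustrative sketch only)."""
    # Differential image between the captured (input) image and the
    # defect-free reference image.
    diff = np.abs(input_image.astype(np.float32) - reference_image.astype(np.float32))

    # Treat regions whose difference exceeds a threshold as the defective area.
    defective_area = diff > threshold

    # Adjust the intensity of the defective area to generate the enhanced image.
    enhanced = diff.copy()
    enhanced[defective_area] *= emphasis
    return np.clip(enhanced, 0.0, 1.0)

# Example usage with random images normalized to [0, 1].
ref, img = np.random.rand(32, 32), np.random.rand(32, 32)
print(generate_enhanced_image(img, ref).shape)  # (32, 32)
```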
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Wang and Yamashita into the apparatus of Ida, in order to further the implementation of defect detection using a defect-free sample product image.
To claim 12, Ida, Wang and Yamashita teach a processor-implemented method (as explained in response to claim 1 above).
To claim 21, Ida, Wang and Yamashita teach a processor-implemented method (as explained in response to claim 1 above).
To claims 4 and 15, Ida, Wang and Yamashita teach claims 1 and 13.
Ida, Wang and Yamashita teach wherein the neural network further comprises: a feature extractor configured to receive the reference image and the input image and extract a feature map; and an attention modulator configured to receive the feature map and the enhanced image and output a modulated feature map (Wang, Figs. 3-6).
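For illustration only, a feature extractor receiving the reference image and the input image, followed by an attention modulator receiving the feature map and the enhanced image, may be sketched as follows; the toy feature channels and the normalization rule are assumptions for illustration and are not drawn from Wang:

```python
import numpy as np

def extract_features(reference_image: np.ndarray, input_image: np.ndarray) -> np.ndarray:
    """Toy feature extractor: stacks the reference image, the input image,
    and their absolute difference into a 3-channel feature map."""
    diff = np.abs(input_image - reference_image)
    return np.stack([reference_image, input_image, diff], axis=0)   # shape (C, H, W)

def attention_modulate(feature_map: np.ndarray, enhanced_image: np.ndarray) -> np.ndarray:
    """Toy attention modulator: derives a spatial attention map from the
    enhanced image and re-weights every feature channel with it."""
    attention = enhanced_image / (enhanced_image.max() + 1e-8)      # normalized to [0, 1]
    return feature_map * (1.0 + attention)                          # broadcast over channels

# Example usage with random 64x64 images.
ref, img = np.random.rand(64, 64), np.random.rand(64, 64)
enhanced = np.abs(img - ref)
modulated = attention_modulate(extract_features(ref, img), enhanced)
print(modulated.shape)  # (3, 64, 64)
```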
To claims 5 and 16, Ida, Wang and Yamashita teach claims 4 and 15.
Ida, Wang and Yamashita teach wherein the attention modulator operates as reflecting that the attention modulator was trained to increase weights of defect-associated values included in the feature map (Ida, paragraphs 0076, 0079, maximum values associated with interest map increases).
To claims 6 and 17, Ida, Wang and Yamashita teach claims 4 and 15.
Ida, Wang and Yamashita teach wherein the attention modulator comprises: an attention map generator configured to receive the enhanced image and output a modulated attention map (Ida, Figs. 15-16); and a feature modulator configured to receive the modulated attention map and the feature map and output the modulated feature map (obviously embedded in Ida and Wang; further, having an attention unit that generates an attention map used to modulate a feature map is well-known in the art, which would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to recognize, since a modulated feature map is a representation of detected features within a Convolutional Neural Network (CNN) that has been dynamically adjusted, or modulated, to emphasize certain aspects or directions, as is typical in automatic modulation classification and object detection; hence, Official Notice is also taken).
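For illustration only, the well-known decomposition referred to in the Official Notice above, an attention map generator feeding a feature modulator, may be sketched as follows; the class names and the min-max scaling are assumptions for illustration and do not represent any cited reference's implementation:

```python
import numpy as np

class AttentionMapGenerator:
    """Maps the enhanced image to a raw attention map (toy: min-max scaling)."""
    def __call__(self, enhanced_image: np.ndarray) -> np.ndarray:
        span = enhanced_image.max() - enhanced_image.min() + 1e-8
        return (enhanced_image - enhanced_image.min()) / span

class FeatureModulator:
    """Applies a modulated attention map to a feature map to produce the
    modulated feature map (toy: elementwise re-weighting)."""
    def __call__(self, modulated_attention: np.ndarray, feature_map: np.ndarray) -> np.ndarray:
        return feature_map * modulated_attention        # broadcast across channels

# Example: attention from an enhanced image modulates a 3-channel feature map.
enhanced = np.random.rand(32, 32)
features = np.random.rand(3, 32, 32)
attention = AttentionMapGenerator()(enhanced)
print(FeatureModulator()(attention, features).shape)   # (3, 32, 32)
```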
To claims 7 and 18, Ida, Wang and Yamashita teach claims 6 and 17.
Ida, Wang and Yamashita teach wherein the attention map generator comprises: a generator configured to receive the enhanced image and output the attention map; and the attention map modulator further configured to receive the attention map and output the modulated attention map, wherein the modulated attention map is obtained by adaptively adjusting an intensity of the attention map (as explained in responses to claims 1 and 6 above).
To claims 8 and 19, Ida, Wang and Yamashita teach claims 6 and 17.
Ida, Wang and Yamashita teach wherein the feature modulator is configured to output the modulated feature map by applying the modulated attention map to the feature map (as explained in response to claim 6 above).
To claims 9 and 20, Ida, Wang and Yamashita teach claims 6 and 17.
Ida, Wang and Yamashita teach wherein the feature modulator is configured to: apply the modulated attention map to a portion of the feature map, and output the modulated feature map by concatenating a result of applying the modulated attention map and a remaining portion of the feature map excluding the portion (Wang, paragraphs 0013, 0027, 0135, 0155, 0218, fusing feature maps; with further explanation in the response to claim 6 above).
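For illustration only, applying the modulated attention map to a portion of the feature map and concatenating the result with the remaining channels may be sketched as follows; the channel split, array shapes, and elementwise application are assumptions for illustration and are not drawn from Wang:

```python
import numpy as np

def modulate_partial(feature_map: np.ndarray,
                     modulated_attention: np.ndarray,
                     num_channels: int) -> np.ndarray:
    """Apply the modulated attention map to only the first `num_channels`
    channels of the feature map, then concatenate the result with the
    untouched remaining channels (illustrative sketch only)."""
    portion = feature_map[:num_channels] * modulated_attention   # elementwise application
    remainder = feature_map[num_channels:]                       # portion left unmodulated
    return np.concatenate([portion, remainder], axis=0)          # fuse back along channels

# Example shapes: an 8-channel, 32x32 feature map; attention applied to 4 channels.
features = np.random.rand(8, 32, 32)
attention = np.random.rand(32, 32)
print(modulate_partial(features, attention, 4).shape)  # (8, 32, 32)
```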
To claim 10, Ida, Wang and Yamashita teach claim 6.
Ida, Wang and Yamashita teach wherein the feature modulator is configured to perform an elementwise operation on at least a portion of the feature map and the modulated attention map (obvious from Fig. 17 of Ida, feature-wise transformation on each element of an input feature map; since this is a well-known technique in the art, Official Notice is also taken).
To claim 11, Ida, Wang and Yamashita teach claim 4.
Ida, Wang and Yamashita teach wherein the neural network further comprises: a defect classifier configured to receive the modulated feature map and output the defect prediction score of the input image (Ida, Fig. 17, paragraphs 0091-0101; and, as explained in response to claim 1 above).
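For illustration only, a defect classifier that receives the modulated feature map and outputs a defect prediction score may be sketched as follows; the pooling-plus-linear head and the sigmoid squashing are assumptions for illustration and are not drawn from Ida:

```python
import numpy as np

def classify_defect(modulated_feature_map: np.ndarray,
                    weights: np.ndarray,
                    bias: float = 0.0) -> float:
    """Toy defect classifier: global-average-pools the modulated feature map
    per channel, applies a linear head, and squashes the result to a defect
    prediction score in (0, 1) (illustrative sketch only)."""
    pooled = modulated_feature_map.mean(axis=(1, 2))      # one value per channel
    logit = float(pooled @ weights + bias)                # linear classification head
    return 1.0 / (1.0 + np.exp(-logit))                   # sigmoid -> probability-like score

# Example: an 8-channel modulated feature map mapped to a single score.
features = np.random.rand(8, 16, 16)
score = classify_defect(features, weights=np.random.rand(8))
print(f"defect prediction score: {score:.3f}")
```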
To claim 22, Ida, Wang and Yamashita teach claim 21.
Ida, Wang and Yamashita teach wherein the neural network adaptively adjusts attention map intensities corresponding to the defective area in the generating of the attention map (as explained in response to claim 1 above).
To claim 23, Ida, Wang and Yamashita teach claim 21.
Ida, Wang and Yamashita teach further comprising generating, by the neural network, a defect prediction score for the input image indicating a likelihood of whether the input image contains a defect within the determined defective area (as explained in response to claim 1 above, probability of defect).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHIYU LU whose telephone number is (571)272-2837. The examiner can normally be reached on weekdays from 8:30 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ZHIYU LU
Primary Examiner
Art Unit 2669
/ZHIYU LU/Primary Examiner, Art Unit 2665 February 11, 2026