DETAILED ACTION
Claims 1, 4-7, and 10-12 are pending in this application and have been examined under the effective filing date of 08/29/2022 in accordance with applicant's claim for foreign priority. Claims 1, 4-7, and 10-12 have been amended; claims 2-3 and 8-9 have been canceled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/11/2026 has been entered.
Response to Arguments
35 U.S.C. 103
Applicant's arguments filed on 02/11/2026 have been fully considered by the examiner and are persuasive; however, in view of the amendments to the claims, a new ground of rejection is presented over Tao, as fully discussed below.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4-7, and 10-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tao (X. Tao, D. Zhang, W. Ma, Z. Hou, Z. Lu and C. Adak, "Unsupervised Anomaly Detection for Surface Defects With Dual-Siamese Network," in IEEE Transactions on Industrial Informatics, vol. 18, no. 11, pp. 7707-7717, Nov. 2022, doi: 10.1109/TII.2022.3142326).
Regarding claim 1, Tao teaches:
A method for generating a training data set for defective products, the method comprising (Tao, Abstract: a dual network is trained using defect and defect-free images):
[Image: excerpt of Tao, Abstract]
collecting an original good product image and an original defect image (Tao, section B, Siamese Network of Feature Extraction, subsection 1, Construct the Dual Input: the input is a defect-free image (Is) (original good product image) and its artificial counterpart (Ig); page 7708, column 2, paragraph 2: to generate the defect image, the network captures both normal and abnormal (defective) features from images (original defect images));
[Image: excerpt of Tao, section B]
[Image: excerpt of Tao, page 7708, column 2, emphasis added]
extracting a defective part directly from the original defect image (Tao, page 7708, column 2, paragraph 2: to generate the defect image, the network captures both normal and abnormal (defective) features from the original images (original defect images); the features are extracted directly from the images);
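For illustration of the "extracting a defective part" limitation above, the following is a minimal sketch of one way a defective part could be extracted directly from a defect image, by differencing it against a registered good image and thresholding. This is an assumption for clarity, not Tao's learned feature extraction or applicant's method; the function name and threshold value are hypothetical.

```python
# Illustrative sketch only: pull a "defective part" out of a defect image by
# differencing against a registered good image and thresholding. Tao's network
# learns features instead; this is a hand-rolled stand-in.
import numpy as np

def extract_defect_part(defect_img: np.ndarray,
                        good_img: np.ndarray,
                        thresh: float = 30.0):
    """Return the defect pixels and a binary mask locating them.

    Both inputs are H x W (grayscale) uint8 arrays of the same shape.
    """
    diff = np.abs(defect_img.astype(np.float32) - good_img.astype(np.float32))
    mask = diff > thresh                   # binary defect mask
    part = np.where(mask, defect_img, 0)   # defect pixels, zero elsewhere
    return part, mask
```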
extracting a depth map or segmentation mask corresponding to the original good product image (Tao, section A, Pipeline: the original non-defect image is input into the model to extract a good/non-defective feature (Fs) from the image; page 7711, column 1, subsection 3, Dense Feature Fusion (DFF): patches are extracted from both the good/defect-free feature (Fs) and the defect feature (Fds) and are output in a "depth" dimension (depth map corresponding to a good image));
[Image: excerpt of Tao, section A, Pipeline]
[Image: excerpt of Tao, page 7711, emphasis added]
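For illustration of the "extracting a depth map or segmentation mask" limitation above, a minimal sketch of deriving a binary segmentation mask from a good-product image via a global Otsu threshold. A learned depth or segmentation model would normally be used; the approach and names here are assumptions, not Tao's pipeline.

```python
# Illustrative sketch only: a simple segmentation mask for a good-product
# image using a global Otsu threshold, implemented directly in numpy.
import numpy as np

def otsu_mask(good_img: np.ndarray) -> np.ndarray:
    """Binary foreground mask for an H x W uint8 grayscale image."""
    hist = np.bincount(good_img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                 # class-0 probability per threshold
    mu = np.cumsum(prob * np.arange(256))   # class-0 cumulative mean mass
    mu_t = mu[-1]                           # global mean intensity
    # Between-class variance for every candidate threshold t.
    sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega) + 1e-12)
    t = int(np.argmax(sigma_b))
    return good_img > t
```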
calculating a first latent vector by providing the original good product image as an input to a first encoder (Tao, page 7708, column 1, paragraphs 1 and 2: the system uses a deep autoencoder (AE) and a variational autoencoder (VAE) (first and second encoders); page 7708, column 2, paragraph 2: the AE is trained on provided features/feature vectors from the non-defective data and anomaly data (defective data); page 7708, column 1: the features are passed as embedding vectors to an AE or VAE (first and second encoders));
[Image: excerpt of Tao, page 7708, column 1, emphasis added]
[Image: excerpt of Tao, page 7708, column 2, emphasis added]
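For illustration of the "calculating a first latent vector" limitation above, a minimal sketch of computing a latent vector by passing an image batch through a small convolutional encoder. The layer sizes, latent dimension, and 64x64 input size are arbitrary assumptions, not taken from Tao or the claims.

```python
# Illustrative sketch only: a tiny convolutional encoder that maps an image
# batch to latent vectors, standing in for the claimed "first encoder".
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)                  # (N, latent_dim) latent vectors

good_batch = torch.randn(4, 1, 64, 64)     # stand-in for good-product images
z1 = TinyEncoder()(good_batch)              # "first latent vector" batch
```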
calculating a second latent vector by applying the defective part to the depth map or segmentation mask and providing an applying result as an input to a second encoder (Tao, page 7710, subsection 3, Dense Feature Fusion (DFF): the features of the defective image (Fds) and the non-defective image (Fs) are fused; then, in section C, Siamese Network of Perceptual Defects, the fused features are provided to an autoencoder);
and generating the training data set for defective products in which the defective part is mixed with the original good product image (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder; page 7715, column 1, paragraph 1: the model generates defective data; page 7707, column 2, paragraph 2: defective data must be generated to train models for defect detection; section B, Siamese Network of Feature Extraction, subsection 1: the defect images used in training are artificial defect images)
by performing mixing of the first latent vector and the second latent vector (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder, where the mixing occurs in latent space; page 7715, column 1, paragraph 1: the model generates defective data).
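For illustration of the claimed latent-vector mixing, a minimal sketch in which a convex combination of the two latents is decoded into a synthetic defect image. The decoder shape and the linear-interpolation mixing rule are assumptions, not Tao's architecture or applicant's method.

```python
# Illustrative sketch only: mix two latent vectors and decode the result into
# a synthetic defect image with a small transposed-convolution decoder.
import torch
import torch.nn as nn

decoder = nn.Sequential(                    # assumed mirror of a 64-d encoder
    nn.Linear(64, 32 * 16 * 16),
    nn.Unflatten(1, (32, 16, 16)),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16x16 -> 32x32
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 32x32 -> 64x64
    nn.Sigmoid(),
)

def mix_and_decode(z1: torch.Tensor, z2: torch.Tensor, alpha: float = 0.5):
    """Blend the good-image latent z1 with the defect-applied latent z2."""
    z_mix = alpha * z1 + (1.0 - alpha) * z2   # mixing in latent space
    return decoder(z_mix)                     # synthetic defect image batch

z1 = torch.randn(4, 64)                       # latent from the first encoder
z2 = torch.randn(4, 64)                       # latent from the second encoder
fake_defects = mix_and_decode(z1, z2)         # shape (4, 1, 64, 64)
```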
Regarding claim 4, Tao discloses: The method of claim 1, wherein applying the defective part to the depth map includes:
applying the defective part to a depth change point in the depth map (Tao, page 7710, column 2, paragraph 1: features are detected at different depths within the layers in order to detect changes in texture, detected edges, or shallow feature regions; page 7711, column 1, paragraph 1: the extracted feature patches are output to a "depth" dimension, adjusted to the size/features of the region's feature map, and then applied/fixed to the image according to those features, which are interpreted as detected changes in texture or shallowness per Tao, page 7710, column 2, paragraph 1, as described above).
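For illustration of the "depth change point" limitation above, a minimal sketch that locates the pixel of steepest depth change with a finite-difference gradient and pastes the defect patch there. The gradient heuristic and all names are assumptions, not Tao's procedure.

```python
# Illustrative sketch only: find the strongest depth discontinuity in a depth
# map and paste a defect patch at that location in the good image.
import numpy as np

def paste_at_depth_edge(depth_map: np.ndarray,
                        good_img: np.ndarray,
                        patch: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(depth_map.astype(np.float32))
    grad_mag = np.hypot(gx, gy)              # depth-change strength per pixel
    r, c = np.unravel_index(np.argmax(grad_mag), grad_mag.shape)
    out = good_img.copy()
    ph, pw = patch.shape
    r = min(r, out.shape[0] - ph)             # keep the patch inside the image
    c = min(c, out.shape[1] - pw)
    out[r:r + ph, c:c + pw] = patch           # apply defect at the depth edge
    return out
```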
Regarding claim 5, Tao discloses: The method of claim 1, wherein applying the defective part to the depth map or segmentation mask includes (Tao, page 7710, subsection 3, Dense Feature Fusion (DFF): the features of the defective image (Fds) and the non-defective image (Fs) are fused; then, in section C, Siamese Network of Perceptual Defects, the fused features are provided to an autoencoder):
transforming at least one of position, size, and shape of the defective part in response to a random input or an input through an input device (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder, where the mixing occurs in latent space; page 7715, column 1, paragraph 1: the model generates defective data; page 7711, column 1, paragraph 1: the patch of defective feature data is resized and shaped to fit the area it is being fixed to, performed using input from the Dense Feature Fusion module);
and applying the transformed defective part (Tao, page 7711, column 1, paragraph 1: the patch of defective feature data is resized and shaped to fit the area it is being fixed to, performed using input from the Dense Feature Fusion module; the patch is fixed to the image to generate a patched image).
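For illustration of the transform limitation above, a minimal sketch that randomly transforms at least the size (rescaling) and position (offset) of a defect patch in response to a random input. The specific transform family is an assumption, not Tao's procedure.

```python
# Illustrative sketch only: randomly rescale (size) and offset (position) a
# defect patch before it is applied to a good image.
import numpy as np

def random_transform(patch: np.ndarray, rng: np.random.Generator):
    scale = rng.uniform(0.5, 1.5)                        # random size factor
    h, w = patch.shape
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    rows = (np.arange(new_h) * h / new_h).astype(int)    # nearest-neighbor
    cols = (np.arange(new_w) * w / new_w).astype(int)    # resampling indices
    resized = patch[np.ix_(rows, cols)]
    shift = rng.integers(0, 10, size=2)                  # random position offset
    return resized, tuple(shift)

rng = np.random.default_rng(0)
patch = np.full((8, 8), 255, dtype=np.uint8)             # toy defect part
transformed, offset = random_transform(patch, rng)
```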
Regarding claim 6, Tao discloses: The method of claim 1, wherein performing mixing includes:
performing adjustment of at least one coefficient value applied to mixing of the latent vectors (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder as feature vectors, where the mixing occurs in latent space; page 7711, column 1, paragraph 1: the patch of defective feature data is resized and shaped to fit the area it is being fixed to, performed using input from the Dense Feature Fusion module, and the patch is fixed to the image to generate a patched image; figure 4 shows the architecture of the Dense Feature Fusion module, which performs adjustment of the size values, interpreted as adjusting a coefficient).
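For illustration of the coefficient-adjustment limitation above, a minimal sketch in which the mixing coefficient is swept to produce a family of mixed latents (and hence, after decoding, defect images of graded severity). The linear-interpolation mixing rule is an assumption.

```python
# Illustrative sketch only: adjusting the coefficient applied to latent mixing.
import torch

def mix(z1: torch.Tensor, z2: torch.Tensor, alpha: float) -> torch.Tensor:
    return alpha * z1 + (1.0 - alpha) * z2   # alpha is the adjustable coefficient

z1, z2 = torch.randn(1, 64), torch.randn(1, 64)
# Sweeping alpha yields mixed latents ranging from defect-dominated to
# good-image-dominated.
mixed = [mix(z1, z2, a) for a in (0.25, 0.5, 0.75)]
```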
Regarding claim 7, Tao discloses:
An electronic device for supporting generation of a data set for training, the electronic device comprising:
a memory storing an original good product image and an original defect image (Tao, page 7712, subsection 6, Implementation Details: the system runs on a GPU with 24 GB of memory (processor and memory); section B, Siamese Network of Feature Extraction, subsection 1, Construct the Dual Input: the input is a defect-free image (Is) (original good product image) and its artificial counterpart (Ig); page 7708, column 2, paragraph 2: to generate the defect image, the network captures both normal and abnormal (defective) features from images (original defect images));
and a processor functionally connected to the memory and configured to (Tao, page 7712, subsection 6, Implementation Details: the system runs on a GPU with 24 GB of memory (processor and memory)):
extract a defective part directly from the original defect image (Tao, page 7708, column 2, paragraph 2: to generate the defect image, the network captures both normal and abnormal (defective) features from the original images (original defect images); the features are extracted directly from the images);
extract a depth map or segmentation mask corresponding to the original good product image (Tao, section A, Pipeline: the original non-defect image is input into the model to extract a good/non-defective feature (Fs) from the image; page 7711, column 1, subsection 3, Dense Feature Fusion (DFF): patches are extracted from both the good/defect-free feature (Fs) and the defect feature (Fds) and are output in a "depth" dimension (depth map corresponding to a good image));
calculate a first latent vector by providing the original good product image as an input to a first encoder (Tao, page 7708, column 1, paragraphs 1 and 2: the system uses a deep autoencoder (AE) and a variational autoencoder (VAE) (first and second encoders); page 7708, column 2, paragraph 2: the AE is trained on provided features/feature vectors from the non-defective data and anomaly data (defective data); page 7708, column 1: the features are passed as embedding vectors to an AE or VAE (first and second encoders));
calculate a second latent vector by applying the defective part to the depth map or segmentation mask and providing an applying result as an input to a second encoder (Tao, page 7710, subsection 3, Dense Feature Fusion (DFF): the features of the defective image (Fds) and the non-defective image (Fs) are fused; then, in section C, Siamese Network of Perceptual Defects, the fused features are provided to an autoencoder);
and generate the training data set for defective products in which the defective part is mixed with the original good product image (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder; page 7715, column 1, paragraph 1: the model generates defective data; page 7707, column 2, paragraph 2: defective data must be generated to train models for defect detection; section B, Siamese Network of Feature Extraction, subsection 1: the defect images used in training are artificial defect images)
by performing mixing of the first latent vector and the second latent vector (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder, where the mixing occurs in latent space; page 7715, column 1, paragraph 1: the model generates defective data).
Regarding claim 10, Tao discloses: The electronic device of claim 7, wherein in a process of applying the defective part to the depth map, the processor is configured to: apply the defective part to a depth change point in the depth map (Tao, page 7710, column 2, paragraph 1: features are detected at different depths within the layers in order to detect changes in texture, detected edges, or shallow feature regions; page 7711, column 1, paragraph 1: the extracted feature patches are output to a "depth" dimension, adjusted to the size/features of the region's feature map, and then applied/fixed to the image according to those features, which are interpreted as detected changes in texture or shallowness per Tao, page 7710, column 2, paragraph 1, as described above).
Regarding claim 11, Tao discloses: The electronic device of claim 7, wherein in a process of applying the defective part to the depth map or segmentation mask, the processor is configured to (Tao, page 7710, subsection 3, Dense Feature Fusion (DFF): the features of the defective image (Fds) and the non-defective image (Fs) are fused; then, in section C, Siamese Network of Perceptual Defects, the fused features are provided to an autoencoder):
transform at least one of position, size, and shape of the defective part in response to a random input or an input through an input device (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder, where the mixing occurs in latent space; page 7715, column 1, paragraph 1: the model generates defective data; page 7711, column 1, paragraph 1: the patch of defective feature data is resized and shaped to fit the area it is being fixed to, performed using input from the Dense Feature Fusion module);
and apply the transformed defective part (Tao, page 7711, column 1, paragraph 1: the patch of defective feature data is resized and shaped to fit the area it is being fixed to, performed using input from the Dense Feature Fusion module; the patch is fixed to the image to generate a patched image).
Regarding claim 12, Tao discloses: The electronic device of claim 7, wherein in a process of performing mixing, the processor is configured to:
perform adjustment of at least one coefficient value applied to mixing of the latent vectors (Tao, page 7708, column 1, paragraph 1: the defective and non-defective image features are mixed by the autoencoder as feature vectors, where the mixing occurs in latent space; page 7711, column 1, paragraph 1: the patch of defective feature data is resized and shaped to fit the area it is being fixed to, performed using input from the Dense Feature Fusion module, and the patch is fixed to the image to generate a patched image; figure 4 shows the architecture of the Dense Feature Fusion module, which performs adjustment of the size values, interpreted as adjusting a coefficient).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For a listing of analogous art as cited by the examiner, please refer to the attached PTO-892 Notice of References Cited.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703) 756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666 /EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666