DETAILED ACTION
Claims 1-4 are pending in this application and have been amended. Claims 1-4 have been given the priority date of 07/12/2022 in accordance with applicant’s claim for foreign priority.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been received.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07/11/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
35 U.S.C. 112(f)
Applicant’s arguments (see Remarks filed 10/14/2025) regarding the claim interpretation under 35 U.S.C. 112(f) have been fully considered by the examiner and are persuasive. Accordingly, the claim interpretations have been withdrawn.
35 U.S.C. 103
Applicant’s arguments (see Remarks filed 10/14/2025) regarding the rejections made under 35 U.S.C. 103 have been fully considered and are not persuasive. Applicant argues (see page 8 of Remarks filed 10/14/2025) that neither Takai nor Picard teaches acquiring, as the non-defective or defective product data, many product feature quantity vectors. The examiner has performed a further search and reconsideration of the art of record, as discussed in the interview on 10/01/2025, and notes that Takai [0062] and [0065] teaches that the input data, which is an image of the defective or non-defective product, may be input into an encoder so that latent vector data having a plurality of dimensions is generated, where, according to [0064], this data is used to detect characteristic (feature) differences between the defective data and the non-defective data. One of ordinary skill in the art would consider this analogous to the plurality of feature vectors recited in claims 1 and 3. Therefore, for at least these reasons, the examiner respectfully maintains the claim rejections made under 35 U.S.C. 103 over Takai in view of Picard.
[Two greyscale figures reproduced from Takai (media_image1.png, media_image2.png), illustrating the encoder and latent vector data processing (Takai, [0062]-[0065]).]
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
1. Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Takai (US 20230145715 A1) in view of Picard (US 12333702 B2).
Regarding claim 1, Takai discloses:
A teacher data preparation method for preparing non-defective product teacher data and defective product teacher data (Takai, [0040] teacher data is used to train a CNN to evaluate tofu during manufacturing),
in a defect classification model that performs learning using a non-defective product image that is an image of a product or a component with no defect and a defective product image that is an image of a product or a component with a defect (Takai, [0041] the teacher data is prepared to be used to train the model by including an evaluation value that notes whether the product is defective or not defective, therefore the model is trained on both defective and non-defective product images, Figure 4, teacher data: includes images and an evaluation value for whether the product is defective or not),
the teacher data preparation method comprising:
a non-defective product teacher data acquiring step for acquiring, as the non-defective product teacher vectors (Takai, [0062] the learning model generates latent vector data based on the input image data, [0029] a camera captures images of the product on the conveyor belt and the products are then determined as defective or non-defective using features of the surface of the tofu (the product being classified), a set region is used for image capturing, and images are captured of the product continuously moving on a conveyor belt, where the captured camera data of the defective and non-defective products are used as the inputs from which the vectors are generated),
many pieces of non-defective product feature quantity data obtained by extracting a feature quantity in a predetermined first number of dimensions from a large number of the non-defective product images (Takai, [0029] a camera captures images of the product on the conveyor belt and the products are then determined as defective or non-defective using features of the surface of the tofu (the product being classified), a set region is used for image capturing, and images are captured of the product continuously moving on a conveyor belt, Figure 2 shows the region R (predetermined dimension) being used to capture image data of the product, the products then being denoted as P or P’ depending on whether they are defective or non-defective, [0042] a large amount of learning data (from the teacher data) is required);
and a defective product teacher data acquiring step for acquiring, as the defective product teacher vectors (Takai, [0062] the learning model generates latent vector data based on the input image data, [0029] a camera captures images of the product on the conveyor belt and the products are then determined as defective or non-defective using features of the surface of the tofu (the product being classified), a set region is used for image capturing, and images are captured of the product continuously moving on a conveyor belt, where the captured camera data of the defective and non-defective products are used as the inputs from which the vectors are generated),
many pieces of generated defective product feature quantity data obtained by generating the feature quantity in the predetermined first number of dimensions (Takai, [0029] a camera captures images of the product on the conveyor belt and the products are then determined as defective or non-defective using features of the surface of the tofu (the product being classified), a set region is used for image capturing, and images are captured of the product continuously moving on a conveyor belt, Figure 2 shows the region R (predetermined dimension) being used to capture image data of the product, the products then being denoted as P or P’ depending on whether they are defective or non-defective, [0042] a large amount of learning data (from the teacher data) is required, meaning a set is generated of defective and non-defective data respectively from the large quantity of images/teacher data),
Takai does not teach: by using a generation model that has performed learning using defective product feature quantity vectors obtained by extracting the feature quantity in the predetermined first number of dimensions from the defective product images smaller in number than the non-defective product images.
However, in the same field of endeavor of model training for defect detection, Picard teaches: by using a generation model that has performed learning using defective product feature quantity vectors obtained by extracting the feature quantity in the predetermined first number of dimensions from the defective product images smaller in number than the non-defective product images (Picard, column 5, lines 5-67, and column 6, lines 1-11: the model is trained on images of parts that are defective and non-defective, using dimensional adjustments performed by an encoder and decoder with a first and a second number of dimensions; column 5, lines 25-31: the training data may contain only no-fault images, with the option to include images of parts with faults/defects; the decision matrix of figure 4 shows the OK (no defect) column having a true positive value of 109 and the KO (defective) column having a true negative value of 108; column 6, lines 7-26: the autoencoder of the system passes the defect data in vector form).
The combination of Takai and Picard would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Takai teaches a method of using images to detect defects in products on a conveyor using machine learning; however, it does not teach the number of non-defect training images being higher than the number of defective images, or the use of encoders and decoders in the model to train the model using multiple dimensions. Picard remedies this deficiency in the same field of using generation models to detect product defects. The motivation for the combination lies in that the use of decoders selecting features in multiple dimensions to train the model allows for better control of the calculation of the metrics in latent space (see Picard, column 1, lines 60-67, and column 2, lines 1-60).
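For illustration only, the claimed data flow can be sketched as follows: many feature vectors are extracted directly from plentiful non-defective images, while the scarce defective vectors are augmented by a generation model. In this toy sketch the "generation model" is simply Gaussian jitter around the few real defective vectors; all names and values are hypothetical and this is not the specific pipeline of Takai, Picard, or the claims.

```python
import random

random.seed(0)

FIRST_DIMS = 4  # the claimed "predetermined first number of dimensions"

def extract_features(image_id):
    # Hypothetical feature extractor: returns a FIRST_DIMS-dim vector
    # deterministically derived from an image identifier.
    rng = random.Random(image_id)
    return [rng.uniform(0.0, 1.0) for _ in range(FIRST_DIMS)]

# Many non-defective images versus only a few defective images.
non_defective = [extract_features(i) for i in range(100)]
defective = [extract_features(1000 + i) for i in range(5)]

def generate_defective(real_vectors, count, noise=0.05):
    """Generate synthetic defective feature vectors near the real ones."""
    out = []
    for _ in range(count):
        base = random.choice(real_vectors)
        out.append([x + random.gauss(0.0, noise) for x in base])
    return out

# Balance the teacher data by generating additional defective vectors.
generated = generate_defective(defective, count=100)
print(len(non_defective), len(defective), len(generated))
```

The point of the sketch is only the asymmetry the claims recite: the defective images are "smaller in number" than the non-defective images, so the defective side of the teacher data is filled out by generation rather than extraction.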
Regarding claim 2, the combination of Takai and Picard teaches: The teacher data preparation method according to claim 1, wherein the defective product teacher data acquiring step includes:
a first learning step for learning weighting of an encoder and a decoder to minimize a reconstruction error between an original image and a reconstructed image (Takai, [0062] the model includes an encoder and a decoder, [0063] the parameters for the encoder and decoder are adjusted to reduce error),
in a variational auto encoder (VAE) including the encoder and the decoder (Picard, column 2, lines 54-60, the system uses a variational autoencoder), when the defective product image is input as the original image (Picard, column 5, lines 10-20, in a first embodiment the input image is an image associated with a fault/defect), the encoder reducing a dimension of the feature quantity that has been extracted from the original image (Picard, column 6, lines 8-11, the encoder constructs a reduced-dimension version of the input data) and calculating a latent variable in a predetermined second number of dimensions (Picard, column 6, lines 12-26, a vector (latent variable) is formed from the reduction of the data by the encoder) and a probability distribution of the latent variable (Picard, column 6, lines 55-61, the auto-encoder is trained such that the projections from the latent variables follow a probability law, such as a uniform law or a multivariate Gaussian law (distribution)), the decoder reconstructing the original image from the latent variable and the probability distribution and outputting the reconstructed image (Picard, column 6, lines 8-11, the encoder constructs a reduced-dimension version of the input data and the decoder reconstructs the data from the reduced-size data, column 6, lines 38-61, the auto-encoder (containing an encoder and a decoder) reconstructs and outputs a reconstructed image, and the output images follow a probability law);
a correct answer data acquiring step for extracting the feature quantity in the predetermined first number of dimensions from the defective product image to acquire as learning correct answer data (Picard, column 7 lines 5-16, the training steps may include a step of “validation” where the model has been previously trained, metrics (features) are calculated in mathematical space (first dimension), column 7 lines 1-5, the model having been trained on images without faults, or a “correct answer”/ground truth step);
a feature quantity vector acquiring step for acquiring a feature quantity vector in the predetermined second number of dimensions corresponding to the defective product image from the probability distribution of the latent variable of the VAE that has performed the learning (Picard, Column 7, lines 56-67, and column 8 lines 1-4, metrics (features/feature quantity) are determined from the projections of the reconstructed image in latent space (second set of dimensions) to determine if an image has a fault or defect (defect determination), this step takes place after the distribution step);
a second learning step for learning weighting of a multilayer perceptron (MLP) decoder (Picard, column 4, lines 59-64, the network can be a multi-layer perceptron) to minimize a loss between the generated defective product feature quantity data and the learning correct answer vectors (Picard, column 6, lines 27-31, the fourth step of training consists of updating the parameters (weighting), the synaptic coefficients of the network, to minimize error between the reconstructed data and the original data (the generated defective data and the learning correct answer data), column 6, lines 7-26, the autoencoder of the system passes the defect data in vector form), in the MLP decoder configured to generate and output the generated defective product feature quantity vectors in the predetermined first number of dimensions (Picard, column 6, lines 12-26, the dataset is passed through the model in the form of latent vectors, column 7, lines 13-35, the fifth step consists of metrics (feature quantity data) being calculated for the reconstructed image (defective data) in mathematical/latent space (first dimensions), column 8, lines 12-22, the decoder reconstructs and outputs reconstructed images with a set number of metrics (features) computed for the image with the defect), when the feature quantity vector in the predetermined second number of dimensions that has been acquired is input (Picard, column 8, lines 1-21, to output the metrics and images in the seventh step the system must be trained according to the fourth step, detailed in column 6, lines 47-60, where the feature vector is obtained for the reconstructed image);
and a defective product feature quantity vectors generating step for inputting a large number of feature quantity vectors in the predetermined second number of dimensions that have been acquired by random sampling from the probability distribution of the latent variable of the VAE that has performed the learning into the MLP decoder that has performed the learning to generate many pieces of generated defective product feature quantity vectors (Picard column 6 lines 38-60, the reconstructed fault images are generated from projecting the image into mathematical space (set of dimensions) and the decoder reconstructs the image from this input, the reconstructed images follow a probability distribution law, Column 6 lines 8-31, the vector is generated from the reduced size projection, Column 8 lines 12-38, the final step of the training is reconstructing the image, further figure 4 shows the results of the detection/training, indicating a large number of inputs/outputs of the model, further indicating a large number of reconstructed images).
The combination of Takai and Picard would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Takai teaches a method of using images to detect defects in products on a conveyor using machine learning; however, it does not teach the number of non-defect training images being higher than the number of defective images, or the use of an MLP and of encoders and decoders in the model to train the model using multiple dimensions. Picard remedies this deficiency in the same field of using generation models to detect product defects. The motivation for the combination lies in that the use of decoders selecting features in multiple dimensions to train the model allows for better control of the calculation of the metrics in latent space; further, the use of an MLP and a variational auto-encoder allows for more control over training and minimizing error in training (see Picard, column 1, lines 60-67, and column 2, lines 1-60).
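As a further illustration, the VAE-plus-MLP-decoder sequence recited in claim 2 (encode to a second number of dimensions, sample the latent distribution, decode back to the first number of dimensions) can be sketched in miniature. Everything here is hypothetical: the encoder, the fixed latent spread, and the hand-wired decoder are toy stand-ins, not the trained networks described in Takai or Picard.

```python
import random

random.seed(1)

FIRST_DIMS, SECOND_DIMS = 4, 2  # claimed first/second numbers of dimensions

def encoder(x):
    # Toy encoder: reduce FIRST_DIMS -> SECOND_DIMS by averaging pairs of
    # features, and return a (mean, sigma) latent probability distribution.
    mu = [(x[0] + x[1]) / 2.0, (x[2] + x[3]) / 2.0]
    sigma = [0.1] * SECOND_DIMS  # fixed spread, for illustration only
    return mu, sigma

def sample_latent(mu, sigma):
    # Random sampling from the latent probability distribution.
    return [m + s * random.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

def mlp_decoder(z):
    # Toy stand-in for the learned MLP decoder: map a SECOND_DIMS latent
    # vector back to a FIRST_DIMS feature quantity vector.
    return [z[0], z[0], z[1], z[1]]

# One scarce defective feature vector yields many generated vectors by
# repeatedly sampling the latent distribution and decoding each sample.
defective_features = [[0.9, 0.8, 0.1, 0.2]]
generated = []
for x in defective_features:
    mu, sigma = encoder(x)
    for _ in range(50):
        generated.append(mlp_decoder(sample_latent(mu, sigma)))

print(len(generated), len(generated[0]))
```

The sketch mirrors the claimed order of operations only: learn the latent distribution from the few defective images, sample it many times, and decode each sample into a full-dimensional defective feature quantity vector.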
Regarding claim 3, the combination of Takai and Picard teaches: A teacher data preparation device that prepares non-defective product teacher data and defective product teacher data (Takai, [0040] teacher data is used to train a CNN to evaluate tofu during manufacturing),
in a defect classification model that performs learning using a non-defective product image that is an image of a product or a component with no defect and a defective product image that is an image of a product or a component with a defect (Takai, [0041] the teacher data is prepared to be used to train the model by including an evaluation value that notes whether the product is defective or not defective, therefore the model is trained on both defective and non-defective product images, Figure 4, teacher data: includes images and an evaluation value for whether the product is defective or not),
wherein the teacher data preparation device comprises a processor, the processor configured to (Takai, figure 1, control device; abstract, the control device has a processor for executing instructions):
acquire, as the non-defective product teacher vectors, many pieces of non-defective product feature quantity vectors obtained by extracting a feature quantity in a predetermined first number of dimensions from a large number of the non-defective product images (Takai, [0029] a camera captures images of the product on the conveyor belt and the products are then determined as defective or non-defective using features of the surface of the tofu (the product being classified), a set region is used for image capturing, and images are captured of the product continuously moving on a conveyor belt, Figure 2 shows the region R (predetermined dimension) being used to capture image data of the product, the products then being denoted as P or P’ depending on whether they are defective or non-defective, [0042] a large amount of learning data (from the teacher data) is required, [0062] the learning model generates latent vector data based on the input image data, where the captured camera data of the defective and non-defective products are used as the inputs from which the vectors are generated);
acquire, as the defective product teacher vectors, many pieces of generated defective product feature quantity vectors obtained by generating the feature quantity in the predetermined first number of dimensions (Takai, [0029] a camera captures images of the product on the conveyor belt and the products are then determined as defective or non-defective using features of the surface of the tofu (the product being classified), a set region is used for image capturing, and images are captured of the product continuously moving on a conveyor belt, Figure 2 shows the region R (predetermined dimension) being used to capture image data of the product, the products then being denoted as P or P’ depending on whether they are defective or non-defective, [0042] a large amount of learning data (from the teacher data) is required, meaning a set is generated of defective and non-defective data respectively from the large quantity of images/teacher data, [0062] the learning model generates latent vector data based on the input image data, where the captured camera data of the defective and non-defective products are used as the inputs from which the vectors are generated),
by using a generation model that has performed learning using defective product feature quantity vectors obtained by extracting the feature quantity in the predetermined first number of dimensions from the defective product images smaller in number than the non-defective product images (Picard, column 5, lines 5-67, and column 6, lines 1-11: the model is trained on images of parts that are defective and non-defective, using dimensional adjustments performed by an encoder and decoder with a first and a second number of dimensions; column 5, lines 25-31: the training data may contain only no-fault images, with the option to include images of parts with faults/defects; the decision matrix of figure 4 shows the OK (no defect) column having a true positive value of 109 and the KO (defective) column having a true negative value of 108; column 6, lines 7-26: the autoencoder of the system passes the defect data in vector form).
The combination of Takai and Picard would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Takai teaches a method of using images to detect defects in products on a conveyor using machine learning; however, it does not teach the number of non-defect training images being higher than the number of defective images, or the use of encoders and decoders in the model to train the model using multiple dimensions. Picard remedies this deficiency in the same field of using generation models to detect product defects. The motivation for the combination lies in that the use of decoders selecting features in multiple dimensions to train the model allows for better control of the calculation of the metrics in latent space (see Picard, column 1, lines 60-67, and column 2, lines 1-60).
Regarding claim 4, the combination of Takai and Picard teaches: The teacher data preparation device according to claim 3, wherein, when acquiring the defective product teacher data, the processor is configured to:
learn weighting of an encoder and a decoder to minimize a reconstruction error between an original image and a reconstructed image (Takai, [0062] the model includes an encoder and a decoder, [0063] the parameters for the encoder and decoder are adjusted to reduce error),
in a variational auto encoder (VAE) including the encoder and the decoder (Picard, column 2, lines 54-60, the system uses a variational autoencoder), when the defective product image is input as the original image (Picard, column 5, lines 10-20, in a first embodiment the input image is an image associated with a fault/defect), the encoder reducing a dimension of the feature quantity that has been extracted from the original image (Picard, column 6, lines 8-11, the encoder constructs a reduced-dimension version of the input data) and calculating a latent variable in a predetermined second number of dimensions (Picard, column 6, lines 12-26, a vector (latent variable) is formed from the reduction of the data by the encoder) and a probability distribution of the latent variable (Picard, column 6, lines 55-61, the auto-encoder is trained such that the projections from the latent variables follow a probability law, such as a uniform law or a multivariate Gaussian law (distribution)), the decoder reconstructing the original image from the latent variable and the probability distribution and outputting the reconstructed image (Picard, column 6, lines 8-11, the encoder constructs a reduced-dimension version of the input data and the decoder reconstructs the data from the reduced-size data, column 6, lines 38-61, the auto-encoder (containing an encoder and a decoder) reconstructs and outputs a reconstructed image, and the output images follow a probability law);
extract the feature quantity in the predetermined first number of dimensions from the defective product image to acquire as learning correct answer data (Picard, column 7, lines 5-16, the training steps may include a step of “validation” where the model has been previously trained and metrics (features) are calculated in mathematical space (first dimension), column 7, lines 1-5, the model having been trained on images without faults, i.e., a “correct answer”/ground truth step);
acquire a feature quantity vector in the predetermined second number of dimensions corresponding to the defective product image from the probability distribution of the latent variable of the VAE that has performed the learning (Picard, column 7, lines 56-67, and column 8, lines 1-4, metrics (features/feature quantity) are determined from the projections of the reconstructed image in latent space (second set of dimensions) to determine whether an image has a fault or defect (defect determination); this step takes place after the distribution step);
learn weighting of a multilayer perceptron (MLP) decoder (Picard, column 4, lines 59-64, the network can be a multi-layer perceptron) to minimize a loss between the generated defective product feature quantity vectors and the learning correct answer data (Picard, column 6, lines 27-31, the fourth step of training consists of updating the parameters (weighting), the synaptic coefficients of the network, to minimize error between the reconstructed data and the original data (the generated defective data and the learning correct answer data), column 6, lines 7-26, the autoencoder of the system passes the defect data in vector form), in the MLP decoder configured to generate and output the generated defective product feature quantity vectors in the predetermined first number of dimensions (Picard, column 7, lines 13-35, the fifth step consists of metrics (feature quantity data) being calculated for the reconstructed image (defective data) in mathematical/latent space (first dimensions), column 8, lines 12-22, the decoder reconstructs and outputs reconstructed images with a set number of metrics (features) computed for the image with the defect), when the feature quantity vector in the predetermined second number of dimensions that has been acquired is input (Picard, column 8, lines 1-21, to output the metrics and images in the seventh step the system must be trained according to the fourth step, detailed in column 6, lines 47-60, where the feature vector is obtained for the reconstructed image);
and a defective product feature quantity data generation unit (Picard, figure 2, 303 classifier) configured to input a large number of feature quantity vectors in the predetermined second number of dimensions that have been acquired by random sampling from the probability distribution of the latent variable of the VAE that has performed the learning into the MLP decoder that has performed the learning to generate many pieces of generated defective product feature quantity data (Picard column 6 lines 38-60, the reconstructed fault images are generated from projecting the image into mathematical space (set of dimensions) and the decoder reconstructs the image from this input, the reconstructed images follow a probability distribution law, Column 6 lines 8-31, the vector is generated from the reduced size projection, Column 8 lines 12-38, the final step of the training is reconstructing the image, further figure 4 shows the results of the detection/training, indicating a large number of inputs/outputs of the model, further indicating a large number of reconstructed images).
The combination of Takai and Picard would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Takai teaches a method of using images to detect defects in products on a conveyor using machine learning; however, it does not teach the number of non-defect training images being higher than the number of defective images, or the use of an MLP and of encoders and decoders in the model to train the model using multiple dimensions. Picard remedies this deficiency in the same field of using generation models to detect product defects. The motivation for the combination lies in that the use of decoders selecting features in multiple dimensions to train the model allows for better control of the calculation of the metrics in latent space; further, the use of an MLP and a variational auto-encoder allows for more control over training and minimizing error in training (see Picard, column 1, lines 60-67, and column 2, lines 1-60).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666