Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/25/2025 has been entered.
Response to Amendment
The previous 35 U.S.C. 112(a) rejections are withdrawn due to Applicant’s amendments.
The previous 35 U.S.C. 112(b) rejections are withdrawn due to Applicant’s amendments.
Response to Arguments
Applicant’s arguments filed 03/25/2025 on pages 9-11 regarding the rejection under 35 U.S.C. 103 with respect to claims 1-4, 6-10, 12-17, and 19-20 have been fully considered but are moot in view of the new grounds of rejection. New references Northcutt and Zvitia have been incorporated below to teach the newly presented limitations.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-10, 13-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (US20210374941A1), hereinafter Liu, in view of Northcutt et al. ("Confident Learning: Estimating Uncertainty in Dataset Labels"), hereinafter Northcutt, further in view of Adler et al. (US20210010953A1), hereinafter Adler, and further in view of Mayr et al. (US20200090314A1), hereinafter Mayr.
Claim 1 is rejected over Liu, Northcutt, Adler, and Mayr.
Regarding claim 1, Liu teaches a method for classifying manufacturing defects comprising: (“a classification network and a locating detection network are provided based on the product defect type, so that in the product defect detection process, the defects that may exist in the product image can be classified using the classification algorithm first;”; [0017] and [0052])
training a first machine learning model with a training dataset including a plurality of labeled data samples; (“training the classification network (first machine learning model) by using a sample image of a product containing different defect types to obtain a classification network capable of classifying the defect types existing in the sample image.”; See FIG. 2 and [0055] S2200; Note: The training dataset includes a plurality of labeled data samples)
receiving an input dataset including first product data and second product data, wherein the first product data includes a first image of a product in a production line and the second product data includes a second image of the product in the production line; (“The detection method of the present embodiment can meet the requirements of the production line and improve the efficiency of the production line.”; [0066]; and [0005]; and “inputting a classification result (first product data) and a detection result (second product data) obtained into the judgment network to judge whether the product has a defect, and detecting a defect type and a defect position when the product has a defect.”; [0063])
detecting, based on the classification, that the second image contains a manufacturing defect of the product in the production line and (“The detection method of the present embodiment can meet the requirements of the production line and improve the efficiency of the production line.”; [0066]; and “inputting a classification result (first product data) and a detection result (second product data) obtained into the judgment network to judge whether the product has a defect, and detecting a defect type and a defect position when the product has a defect.”; [0063])
Liu does not teach identifying, from the training dataset, a data sample from the plurality of labeled data samples that includes a label that is predicted, via a deep learning model, to be an erroneous label, wherein the identifying of the data sample is based on estimating a joint probability distribution between the label and a true label for the data sample;
based on identifying the data sample that includes the label that is predicted to be an erroneous label, labeling the data sample with a second label for indicating that the data sample bears the erroneous label;
However, Northcutt teaches identifying, from the training dataset, a data sample from the plurality of labeled data samples that includes a label that is predicted, via a deep learning model, to be an erroneous label, wherein the identifying of the data sample is based on estimating a joint probability distribution between the label and a true label for the data sample; (“we estimate both jointly by directly estimating the joint distribution of label noise, p(ỹ, y*). Our goal is to estimate every p(ỹ, y*) as a matrix … to find all mislabeled examples x in dataset X where y*≠ ỹ”; page 4: Note: See Table 1 where ỹ is an observed, noisy label and y* is the unknown, true, uncorrupted label.)
based on identifying the data sample that includes the label that is predicted to be an erroneous label, labeling the data sample with a second label for indicating that the data sample bears the erroneous label; (“we estimate both jointly by directly estimating the joint distribution of label noise, p(ỹ, y*). Our goal is to estimate every p(ỹ, y*) as a matrix … to find all mislabeled examples x in dataset X where y*≠ ỹ”; page 4; Note: The mislabeled examples x are the erroneous label.)
It would have been obvious before the effective filing date to combine classification of manufacturing defects of Liu with the data cleaning by removing mislabeled examples of Northcutt to effectively increase model accuracy by cleaning data prior to training (Northcutt, Abstract). Liu and Northcutt are analogous art because they both concern classification of data samples.
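For context, the confident-learning procedure relied upon from Northcutt (compute a per-class threshold as the average self-confidence of that class, then flag examples whose confidently predicted class disagrees with the given label, i.e., the off-diagonal of the confident joint) can be sketched as follows. This is a minimal numpy illustration with hypothetical data and names, not code from the reference:

```python
# Minimal sketch of confident-learning label-error identification.
import numpy as np

def find_label_errors(pred_probs, labels):
    """Return indices of likely mislabeled examples.

    pred_probs: (n, K) out-of-sample predicted probabilities.
    labels: (n,) observed (possibly noisy) labels y~.
    """
    n, K = pred_probs.shape
    # Per-class threshold t_j: expected (average) self-confidence of class j,
    # i.e., mean predicted probability of j over examples labeled j.
    thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(K)])
    errors = []
    for i in range(n):
        # Classes this example confidently belongs to.
        confident = np.where(pred_probs[i] >= thresholds)[0]
        if confident.size == 0:
            continue
        # Most likely true class y* among the confident classes.
        y_star = confident[np.argmax(pred_probs[i, confident])]
        if y_star != labels[i]:  # off-diagonal entry: y* != y~
            errors.append(i)
    return errors

# Toy data: example 2 is labeled 0, but the model is confident it is class 1.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = np.array([0, 0, 0, 1])
print(find_label_errors(probs, labels))  # [2]
```

The flagged indices correspond to the "mislabeled examples x where y* ≠ ỹ" quoted above; in practice the predicted probabilities would come from a cross-validated deep learning model.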
Liu does not teach training a second machine learning model to learn features of the data sample labeled with the second label wherein the training of the second machine learning model includes computing a decision boundary based on the features of the data sample labeled with the second label for separating the data sample from a second data sample of the plurality of labeled data samples;
invoking the second machine learning model for separating the first product data from the second product data based on the decision boundary, based on a first feature of the first product data, and based on a second feature of the second product data;
in response to separating the first product data from the second product data, removing the first product data from the input dataset and invoking the first machine learning model for generating a classification based on the second product data;
However, Adler teaches training a second machine learning model to learn features of the data sample labeled with the second label wherein the training of the second machine learning model includes computing a decision boundary based on the features of the data sample labeled with the second label for separating the data sample from a second data sample of the plurality of labeled data samples; (“the system may classify, using a first machine-learning model (second machine learning model), the object of interest into an inlier category or an outlier category. The first machine-learning model may be trained based on a number of non-labeled samples during the unsupervised training process. The object of interest may be classified based on a first set of features of the object of interest and a first set of pre-determined features associated with a number of compliant samples. The object of interest may be classified into the inlier category when the object of interest meets a boundary criterion of an inlier sample model in a feature space, which characterized by one or more features of the first set of features. In particular embodiments, the object of interest may be classified into the outlier category when the object of interest falls beyond a boundary criterion of an inlier sample model in a feature space, which is characterized by one or more features of the first set of features.”; [0116])
invoking the second machine learning model for separating the first product data from the second product data based on the decision boundary, based on a first feature of the first product data, and based on a second feature of the second product data; (“the system may classify, using a first machine-learning model (second machine learning model), the object of interest into an inlier category or an outlier category. The first machine-learning model may be trained based on a number of non-labeled samples during the unsupervised training process. The object of interest may be classified based on a first set of features of the object of interest and a first set of pre-determined features associated with a number of compliant samples. The object of interest may be classified into the inlier category when the object of interest meets a boundary criterion of an inlier sample model in a feature space, which characterized by one or more features of the first set of features. In particular embodiments, the object of interest may be classified into the outlier category when the object of interest falls beyond a boundary criterion of an inlier sample model in a feature space, which is characterized by one or more features of the first set of features.”; [0116])
in response to separating the first product data from the second product data, removing the first product data from the input dataset and invoking the first machine learning model for generating a classification based on the second product data; (“Then, the system may use a first ML model to classify the inspected samples into inliers (i.e., compliant samples) and outliers (i.e., non-compliant samples) based on the first set of features. For the non-compliant samples, the system may extract a second set of features from the X-ray images of the inspected samples and use a second ML model (the first machine learning model) to classify them into respective defect categories based on the second set of features”; [0006]; Note: The outlier non-compliant samples (second product data) are isolated from the inlier compliant samples (first product data) for further defect analysis.)
It would have been obvious before the effective filing date to combine the identification of data samples of Liu with Adler's classification of inspected samples into inliers and outliers to improve the precision and accuracy for identifying and classifying defects (Adler, [0005]). Liu and Adler are analogous art because they both concern manufacturing defect classification. Examiner notes that Adler's first ML model is being mapped to the claimed second machine learning model, and Adler's second ML model is being mapped to the claimed first machine learning model.
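The inlier/outlier separation relied upon from Adler amounts to learning a boundary criterion in feature space from compliant samples and forwarding only the out-of-boundary (outlier) samples to the defect classifier. A minimal sketch with a toy hypersphere boundary and hypothetical data, not Adler's actual model:

```python
# Toy inlier model: a hypersphere around the compliant training samples.
import numpy as np

class BoundarySeparator:
    def fit(self, compliant_features):
        # Center of the compliant cluster in feature space.
        self.center = compliant_features.mean(axis=0)
        dists = np.linalg.norm(compliant_features - self.center, axis=1)
        # Boundary criterion: maximum training distance plus a small margin.
        self.radius = dists.max() * 1.1
        return self

    def split(self, features):
        """Separate a batch into (inliers, outliers) by the learned boundary."""
        dists = np.linalg.norm(features - self.center, axis=1)
        inlier_mask = dists <= self.radius
        return features[inlier_mask], features[~inlier_mask]

# Compliant training samples cluster near the origin.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(100, 2))
sep = BoundarySeparator().fit(train)

batch = np.array([[0.1, 0.2], [8.0, 8.0]])  # one inlier, one clear outlier
inliers, outliers = sep.split(batch)
# Only `outliers` would be forwarded to the defect-classification model.
```

In a real system the boundary would come from an unsupervised model (e.g., a one-class classifier) trained on the first set of features, as Adler's paragraph [0116] describes.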
Liu does not teach removing the product from the production line based on the detecting of the manufacturing defect.
However, Mayr teaches removing the product from the production line based on the detecting of the manufacturing defect. (“the method also includes triggering a response when the object is classified to be in an abnormal condition. For example, the response may include providing an alarm, such as an audible alarm, a tactile alarm, etc. Other exemplary responses include: triggering a message (e.g. text message) to be sent to a computing device, automatically recording the classification of the object as normal, abnormal or with respect to any classification attribute (such recording may be in digital or any other form and in any computing device or storage medium), turning on particular signaling lights, stopping the production line (e.g., conveyor), removing the object deemed abnormal from a production line (e.g., conveyor), activating a sensor (e.g., camera) to monitor the removal of the object, etc.,”; [0014])
It would have been obvious before the effective filing date to combine classification of manufacturing defects of Liu with the removal of abnormal objects from a production line of Mayr to improve product quality (Mayr [0058]). Liu and Mayr are analogous art because they both concern manufacturing defect classification.
Claim 2 is rejected over Liu, Northcutt, Adler, and Mayr with the incorporation of claim 1.
Regarding claim 2, Liu does not teach wherein the label is predicted to bear the erroneous label based on a confidence level of the label being computed to be below a set threshold.
However, Northcutt teaches wherein the label is predicted to bear the erroneous label based on a confidence level of the label being computed to be below a set threshold. (“likely belong to class y*=j, determined by a per-class threshold, tj. Formally, the definition of the confident joint is … and the threshold tj is the expected (average) self-confidence for each class”; page 6; and “Low self-confidence is a heuristic-likelihood of being a label error.”; page 5)
It would have been obvious before the effective filing date to combine classification of manufacturing defects of Liu with the data cleaning by removing mislabeled examples of Northcutt to effectively increase model accuracy by cleaning data prior to training (Northcutt, Abstract). Liu and Northcutt are analogous art because they both concern classification of data samples.
Claim 3 is rejected over Liu, Northcutt, Adler, and Mayr with the incorporation of claim 1.
Regarding claim 3, Liu teaches wherein the first product data is associated with a confidence level below a set threshold, and (“The smaller the classification value is, the lower the accuracy rate of defect classification by the classification network is, and the less reliable the classification result is.”; [0069]; and “the second type of defect (first product data) is a defect for which the classification accuracy rate of the classification network is not greater than a first threshold value.”; [0072])
Liu does not teach the second product data is associated with a confidence level above the set threshold.
However, Northcutt teaches the second product data is associated with a confidence level above the set threshold. (“The choice of which examples to remove is made by ranking the examples based on predicted probabilities … select [prune] examples with lowest self-confidence”; page 9; Note: The second data sample is one that was kept, and is therefore above the pruning self-confidence cutoff threshold.)
It would have been obvious before the effective filing date to combine classification of manufacturing defects of Liu with the data cleaning by removing mislabeled examples of Northcutt to effectively increase model accuracy by cleaning data prior to training (Northcutt, Abstract). Liu and Northcutt are analogous art because they both concern classification of data samples.
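The pruning step quoted from Northcutt (rank examples by self-confidence and remove those ranked lowest, so the kept samples sit above the pruning cutoff) can be sketched as follows; the data and function name are hypothetical:

```python
# Sketch of ranking-based pruning by self-confidence.
import numpy as np

def prune_lowest_self_confidence(pred_probs, labels, num_to_remove):
    """Return indices of the kept (higher self-confidence) examples."""
    # Self-confidence: predicted probability of the label the dataset gives.
    self_conf = pred_probs[np.arange(len(labels)), labels]
    order = np.argsort(self_conf)              # lowest self-confidence first
    removed = set(order[:num_to_remove].tolist())
    return [i for i in range(len(labels)) if i not in removed]

probs = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
labels = np.array([0, 0, 1, 0])
# Example 1 has the lowest self-confidence (0.4) and is pruned.
print(prune_lowest_self_confidence(probs, labels, num_to_remove=1))  # [0, 2, 3]
```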
Claim 4 is rejected over Liu, Northcutt, Adler, and Mayr with the incorporation of claim 1.
Regarding claim 4, Liu does not teach wherein the training of the second machine learning model includes invoking supervised learning based on the learned features of the data sample.
However, Adler teaches wherein the training of the second machine learning model includes invoking supervised learning based on the learned features of the data sample. (“the machine-learning (ML) models may be trained by pre-labeled defective samples (e.g., parts with different types of defects) during a supervised learning process. In particular embodiments, the machine-learning (ML) models may be trained by un-labeled samples during an unsupervised learning process.”; [0105])
It would have been obvious before the effective filing date to combine the identification of data samples of Liu with the supervised learning process of Adler to improve the precision and accuracy for identifying and classifying defects (Adler, [0005]). Liu and Adler are analogous art because they both concern manufacturing defect classification.
Claim 7 is rejected over Liu, Northcutt, Adler, and Mayr with the incorporation of claim 1.
Regarding claim 7, Liu does not teach generating a signal based on the classification, wherein the signal is for triggering an action.
However, Mayr teaches generating a signal based on the classification, wherein the signal is for triggering an action. (“the method also includes triggering a response when the object is classified to be in an abnormal condition. For example, the response may include providing an alarm, such as an audible alarm, a tactile alarm, etc. Other exemplary responses include: triggering a message (e.g. text message) to be sent to a computing device, automatically recording the classification of the object as normal, abnormal or with respect to any classification attribute (such recording may be in digital or any other form and in any computing device or storage medium), turning on particular signaling lights, stopping the production line (e.g., conveyor), removing the object deemed abnormal from a production line (e.g., conveyor), activating a sensor (e.g., camera) to monitor the removal of the object, etc.,”; [0014])
It would have been obvious before the effective filing date to combine classification of manufacturing defects of Liu with the removal of abnormal objects from a production line of Mayr to improve product quality (Mayr [0058]). Liu and Mayr are analogous art because they both concern manufacturing defect classification.
Claim 8 is rejected over Liu, Northcutt, Adler, and Mayr.
Regarding claim 8, Liu teaches a system for classifying manufacturing defects, the system comprising: (“As shown in FIG. 1, the product defect detection system 100 comprises an image acquisition device 1000 and a product defect detection device 2000.”; [0035])
a processor; and (“In the present embodiment, referring to FIG. 1, the product defect detection device 2000 may comprise a processor 2100, a memory 2200, an interface device 2300, a communication device 2400, a display device 2500, an input device 2600, a speaker 2700, a microphone 2800, etc.”; [0039])
a memory, wherein the memory has stored therein instructions that, when executed by the processor, cause the processor to: (“In the present embodiment, referring to FIG. 1, the product defect detection device 2000 may comprise a processor 2100, a memory 2200, an interface device 2300, a communication device 2400, a display device 2500, an input device 2600, a speaker 2700, a microphone 2800, etc.”; [0039])
The remainder of claim 8 is claim 1 in the form of a system and is rejected for the same reasons as claim 1 stated above.
Dependent claim 9 is claim 3 in the form of a system and is rejected for the same reasons as claim 3 stated above. For the rejection of the limitations specifically pertaining to the system of claim 8, see the rejection of claim 8 above.
Dependent claim 10 is claim 4 in the form of a system and is rejected for the same reasons as claim 4 stated above. For the rejection of the limitations specifically pertaining to the system of claim 8, see the rejection of claim 8 above.
Dependent claim 13 is claim 7 in the form of a system and is rejected for the same reasons as claim 7 stated above. For the rejection of the limitations specifically pertaining to the system of claim 8, see the rejection of claim 8 above.
Claim 14 is rejected over Liu, Northcutt, Adler, and Mayr.
Regarding claim 14, Liu teaches a system for classifying manufacturing defects, the system comprising: (“As shown in FIG. 1, the product defect detection system 100 comprises an image acquisition device 1000 and a product defect detection device 2000.”; [0035])
a data collection circuit configured to collect an input dataset; and (“The image acquisition device 1000 (data collection circuit) is configured to acquire a product image and provide the acquired product image to the product defect detection device 2000.”; [0036])
a processing circuit coupled to the data collection circuit, the processing circuit having logic for: (“In the present embodiment, referring to FIG. 1, the product defect detection device 2000 may comprise a processor 2100, a memory 2200, an interface device 2300, a communication device 2400, a display device 2500, an input device 2600, a speaker 2700, a microphone 2800, etc.”; [0039])
The remainder of claim 14 is claim 1 in the form of a system and is rejected for the same reasons as claim 1 stated above.
Dependent claim 15 is claim 2 in the form of a system and is rejected for the same reasons as claim 2 stated above. For the rejection of the limitations specifically pertaining to the system of claim 14, see the rejection of claim 14 above.
Dependent claim 16 is claim 3 in the form of a system and is rejected for the same reasons as claim 3 stated above. For the rejection of the limitations specifically pertaining to the system of claim 14, see the rejection of claim 14 above.
Dependent claim 17 is claim 4 in the form of a system and is rejected for the same reasons as claim 4 stated above. For the rejection of the limitations specifically pertaining to the system of claim 14, see the rejection of claim 14 above.
Dependent claim 20 is claim 7 in the form of a system and is rejected for the same reasons as claim 7 stated above. For the rejection of the limitations specifically pertaining to the system of claim 14, see the rejection of claim 14 above.
Claims 6, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Liu, Northcutt, Adler, and Mayr, and in further view of Zvitia et al. (US20160189055A1), hereinafter Zvitia.
Claim 6 is rejected over Liu, Northcutt, Adler, Mayr and Zvitia with the incorporation of claim 1.
Regarding claim 6, Liu does not teach tuning the decision boundary based on a tuning threshold.
However, Zvitia teaches tuning the decision boundary based on a tuning threshold. (“Aspects of the present disclosure relate to tuning a classification system by defining certain performance measures as constraints and optimizing the confidence thresholds under the performance measures constraints.”; [0028] and “The operator of ADC machine 26 sets confidence thresholds, which determine the loci of the boundaries of the regions in feature space 40 that are associated with the defect classes. Setting the confidence threshold for multi-class classification is equivalent to placing borders 48 on either side of boundary 46. For example, the higher the confidence threshold, the farther apart will borders 48 be. The ADC machine rejects defects 51, which are located between borders 48 but within border 52, as “undecidable,” meaning that the machine cannot automatically assign these defects to one class or the other with the required level of confidence”; [0048])
It would have been obvious before the effective filing date to combine classification of manufacturing defects of Liu with the classification system tuning of Zvitia for improved performance of automated classification (Zvitia, [0003]). Liu and Zvitia are analogous art because they both concern a method of manufacturing defect classification.
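The threshold tuning relied upon from Zvitia places borders around the class boundary so that samples falling inside the band are rejected as "undecidable" rather than auto-classified. A minimal sketch with a single assumed confidence threshold (hypothetical data and names, not Zvitia's implementation):

```python
# Sketch of tuning a decision boundary with a confidence-threshold band.
import numpy as np

def classify_with_band(pred_probs, threshold):
    """Return a class index per sample, or -1 when confidence is too low.

    Samples whose top-class confidence falls below `threshold` lie between
    the borders around the boundary and are treated as undecidable.
    """
    top = pred_probs.max(axis=1)
    labels = pred_probs.argmax(axis=1)
    labels[top < threshold] = -1  # undecidable region between the borders
    return labels

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.2, 0.8]])
# A higher threshold widens the undecidable band around the boundary.
print(classify_with_band(probs, threshold=0.7).tolist())  # [0, -1, 1]
print(classify_with_band(probs, threshold=0.9).tolist())  # [0, -1, -1]
```

Tuning the threshold trades automation rate against classification confidence, which is the optimization Zvitia's paragraph [0028] describes.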
Dependent claim 12 is claim 6 in the form of a system and is rejected for the same reasons as claim 6 stated above. For the rejection of the limitations specifically pertaining to the system of claim 8, see the rejection of claim 8 above.
Dependent claim 19 is claim 6 in the form of a system and is rejected for the same reasons as claim 6 stated above. For the rejection of the limitations specifically pertaining to the system of claim 14, see the rejection of claim 14 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TRAN whose telephone number is (703)756-1525. The examiner can normally be reached M-F 9:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H TRAN/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147