Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/23/2025 has been entered.
Remarks
This Office Action is responsive to Applicant's Amendment filed on September 24, 2025, in which claims 1 and 13-15 are amended. No claims have been newly added or cancelled. Claims 1-19 are currently pending.
Response to Arguments
Applicant’s Remarks received 09/24/2025 were addressed in the Advisory Action mailed 10/09/2025. The rejections of claims 1-19 under 35 U.S.C. 101 were stated to be withdrawn contingent on the filing of the claim amendments; as the amendments have now been filed and entered, the rejections of claims 1-19 under 35 U.S.C. 101 are withdrawn. However, the Examiner stated that Applicant’s arguments against the rejections of claims 1-19 under 35 U.S.C. 103 were not persuasive, and those rejections are therefore maintained. See the Advisory Action mailed 10/09/2025.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Claims 1 and 13-15 have been interpreted under 35 U.S.C. 112(f) because they use either the generic placeholder “controlling apparatus” coupled with the functional language “configured to” (claims 1, 13, and 15) or the generic placeholder “data processing apparatus” coupled with the functional language “adapted to” (claim 14), without reciting sufficient structure to achieve the claimed function. Furthermore, the generic placeholder is not preceded by a structural modifier.
Since the claim limitation(s) invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claims 1 and 13-15 have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation:
For “controlling apparatus”: (Pg. 21) “The controlling apparatus comprises a sensor assembly adapted to acquire a data entity, the data entity being indicative of at least one property of a manufacturing of a respective product. The controlling apparatus further comprises a data processing apparatus adapted to classify the manufacturing of the product based on the data entity and a classification model. Furthermore, the controlling apparatus comprises a control interface adapted to output a control signal such that the at least one process parameter is adapted - in particular, changed - based on the classifying”.
For “a data processing apparatus”: (Pg. 27) “The data processing apparatus 108 is adapted to classify the manufacturing of the product based on the data entity and the classification model. So, in some modifications, the data processing apparatus 108 is adapted to capture an image of the current powder layer by the image capturing device such as a camera 102 and classifies the currently created powder layer with regard to the homogeneity of the powder layer by the classification model. Moreover, the data processing apparatus 108 is adapted to output by the control interface 106 a control signal such that the homogeneity of the current powder layer is changed on the classifying.”
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-6, and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over Mehr et al. (U.S. Patent Application Publication No. 2018/0341248), hereinafter Mehr, in view of Gokalp (U.S. Patent Application Publication No. 2023/0376857), hereinafter Gokalp, further in view of Asar et al. (U.S. Patent Application Publication No. 2008/0133434), hereinafter Asar, and further in view of Wang et al., “Cost-Effective Quality Assurance in Crowd Labeling”, hereinafter Wang.
Regarding claim 1,
Mehr teaches A computer-implemented method for training a classification model for controlling a manufacturing process ((Mehr Abstract) “Disclosed herein are machine learning-based methods and systems for automated object defect classification and adaptive, real-time control of additive manufacturing and/or welding processes”):
wherein products are manufactured according to at least one process parameter and wherein at least one property is indicative of the manufacturing of the products ((Mehr [0111]) “one or more process monitoring tools may be used to provide real-time data on process parameters or properties of the object being fabricated, both of which will be referred to herein as ‘process characterization data’”)
said method comprising: acquiring, by a sensor assembly of a controlling apparatus, ((Mehr [0125]) “The automated object defect classification methods will generally comprise: b) providing one or more sensors, wherein the one or more sensors provide real-time data for one or more object properties”) a set of data entities, each of the data entities being indicative of at least one property of a manufacturing of a respective product; ((Mehr [0145]) “The training data comprises a set of paired training examples, e.g., where each example comprises a set of defects detected for a given object and the resultant classification of the given object”, defects are a property of manufacturing a product)
receiving, by a data processing apparatus, the classification model from a data storage of the controlling apparatus configured to classify the manufacturing of the respective product and to control the manufacturing process; ((Mehr [0170]) “Some aspects of the methods and systems provided herein, such as the disclosed object defect classification or additive manufacturing process control algorithms, are implemented by way of machine (e.g., processor) executable code stored in an electronic storage location of the computer system, such as, for example, in the memory or electronic storage unit…In some cases, the code is retrieved from the storage unit and stored in the memory for ready access by the one or more processors”, the storage unit corresponds to a data storage of the controlling apparatus)
training the classification model using a training set, ((Mehr [0002]) “c) providing a predicted optimal set or sequence of one or more process control parameters for fabricating the object, wherein the predicted optimal set of one or more process control parameters are derived using a machine learning algorithm that has been trained using the training data set”)
wherein the classification model is an artificial intelligence model that classifies data entities into correct categories, ((Mehr [0007]) “In some embodiments, the object defects are detected as differences between object property data and a reference data set that are larger than a specified threshold, and are classified using a one-class support vector machine (SVM) or autoencoder algorithm”)
and wherein the training set comprises the data entities and the respective one or more labels; ((Mehr [0132]) “In some preferred embodiments, object defects may be detected and classified using an unsupervised one-class support vector machine (SVM), autoencoder, clustering, or nearest neighbor (e.g., kNN) machine learning algorithm and a training data set that comprises object property data for both defective and defect-free objects”)
Gokalp teaches the following further limitations that Mehr does not teach:
acquiring one or more labels for each of the data entities from an agent ((Gokalp [0045]) “the classification service may employ one or more label providers for a given classification problem, such as subject matter experts with respect to the problem domain, volunteers, or a group of individuals who have been identified via a web-based task marketplace (e.g., a web site at which individuals may register their interest in performing tasks such as labeling data items for a fee)”, label providers are agents)
training a labeling score model based on the data entities, the respective one or more labels acquired from the agent, a set of labeling metrics based on the acquiring from the agent ((Gokalp [0039]) “In at least some embodiments, at least two types of models may be trained iteratively: (a) a set of one or more models whose output with respect to a training set is used to select candidates for labeling feedback for subsequent training iterations (e.g., using an active learning algorithm which uses variance in predictions among the different models for a given data item)”, the model or set of models used to select candidates for labeling feedback corresponds to a labeling score model)
At the time of filing, one of ordinary skill in the art would have had motivation to combine Mehr and Gokalp by taking the method for training a classification model to control a manufacturing process taught by Mehr and combining it with the method for acquiring labels from agents taught by Gokalp, as obtaining labeled data is essential to the supervised training of machine learning classifiers, and more niche classification domains that lack large, publicly available datasets would benefit from the production of more labeled data, since models trained on larger amounts of data tend to be more accurate. Such a combination would be obvious.
Asar teaches the following further limitations that neither Mehr, nor Gokalp explicitly teaches:
and a classifier score obtained from a validation of the trained classification model using a validation set, ((Asar [0071]) “Validation dataset: Dataset used for validating the model during the learning phase and to estimate the prediction error for model selection”, prediction error of a model corresponds to a classifier score, a person of ordinary skill in the art understands a validation set to be used to validate the training of a model)
wherein the classifier score is a numerical measure of how well the trained classification model fits the validation set, ((Asar [0148]-[0149]) “We perform a single split and select a set of optimization parameters for training/validation. If this is a classification problem, then once training has been performed, we perform validation using multiple thresholds (assume T number of thresholds)…For each threshold value, we calculate validation error rate for that threshold as follows: errate=Sum(LF across all inputs in the validation set)/(Total number of element in the validation set)”, a validation error rate for a classification problem corresponds to a classifier score that is a numerical measure of how well a trained classification model fits a validation set)
At the time of filing, one of ordinary skill in the art would have had motivation to combine Mehr, Gokalp, and Asar by taking the method for training a classification model to control a manufacturing process, including acquiring labels from agents, taught jointly by Mehr and Gokalp, and including the validation of the model on a validation dataset to determine its performance, taught by Asar, as validating machine learning models on a validation dataset is a well-known technique in the art for obtaining early feedback on a model during training in order to tune its parameters, without exposing the model to the test dataset used to evaluate its ultimate performance. Such a combination would be obvious.
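For illustration only, the per-threshold validation error rate quoted from Asar [0148]-[0149] (errate = Sum(LF across all inputs in the validation set) / (total number of elements in the validation set)) can be sketched as follows, assuming a 0/1 loss function and a thresholded binary classifier; all function and variable names here are hypothetical and do not appear in the record:

```python
# Illustrative sketch of Asar's per-threshold validation error rate,
# assuming LF is a 0/1 loss on a thresholded binary classifier.
# Hypothetical names; not the claimed method.

def validation_error_rate(scores, labels, threshold):
    """0/1-loss error rate of thresholded scores on a validation set."""
    losses = [int((score >= threshold) != label)
              for score, label in zip(scores, labels)]
    # errate = Sum(LF across all inputs) / (total elements in the set)
    return sum(losses) / len(losses)

def best_threshold(scores, labels, thresholds):
    """Select, from T candidate thresholds, the one minimizing errate."""
    return min(thresholds,
               key=lambda t: validation_error_rate(scores, labels, t))
```

Computing the error rate for each candidate threshold and keeping the minimizer corresponds to Asar's description of validating with multiple thresholds during model selection.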
Wang teaches the following further limitation that neither Mehr, nor Gokalp, nor Asar teaches:
and wherein the labeling score model is an artificial intelligence model that generates and outputs a labeling score which is a numerical measure of an efficiency of a labeling process performed by the agent and of a quality of a label obtained from the agent ((Wang Pg. 6) “In this section, we describe several algorithms for inferring the true classes of objects and the quality of workers…Another advanced inference technique is EM, first proposed by Dawid and Skene (1979) in the context of medical diagnosis. The algorithm iterates until convergence, following two steps: (1) it estimates the true class for each object using the labels provided by a set of workers, accounting for the error rates of each worker; and (2) it estimates the error rates of each worker by comparing the submitted labels with estimated true class for each object…we propose a generative model of labels, abilities, and difficulties (GLAD) and use an EM approach to obtain the maximum likelihood estimates of the α(k) , β(o) , and t(o) for each worker (k) and each object (o)”, Wang Pg. 8, Algorithm 4 shows inference of an artificial intelligence model that generates scores for labels of objects and quality of workers that provide labels)
[Image: media_image1.png, greyscale PNG, 433 × 777 (Wang, Algorithm 4)]
At the time of filing, one of ordinary skill in the art would have had motivation to combine Mehr, Gokalp, Asar, and Wang by taking the method for training a classification model to control a manufacturing process, including acquiring labels from agents and validation on a validation set, taught jointly by Mehr, Gokalp, and Asar, and including a labeling score artificial intelligence model that produces a score evaluating an agent's labeling efficiency and label quality, taught by Wang, as Wang teaches: (Wang Pg. 19) “we introduce two novel metrics that can be used to objectively rank the performance of crowdsourced workers, both allowing employers to separate workers’ correctable errors from uncorrectable errors and incorporate unequal costs of different types of classification errors. In particular, the contributed value metric directly measures worker’s individual contribution in quality assurance through redundancy and provides a basis for employers to develop more fair and efficient compensation schemes”. Such a combination would be obvious.
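For illustration only, the two-step EM iteration Wang attributes to Dawid and Skene (estimate each object's true class from worker labels weighted by worker quality, then re-estimate each worker's quality against those estimates) can be sketched as below. This is a simplified "one-coin" variant (a single accuracy per worker, binary classes) rather than Wang's full GLAD model, and all names are hypothetical:

```python
# Simplified Dawid-Skene-style EM sketch: alternately (1) infer the
# posterior true class of each object from worker labels weighted by
# worker accuracy, and (2) re-estimate each worker's accuracy as their
# expected agreement with those posteriors. Illustrative only.

def em_worker_quality(labels, n_iters=20):
    """labels: dict mapping (worker, obj) -> 0/1 label.
    Returns (class_posteriors, worker_accuracies)."""
    workers = {w for w, _ in labels}
    objects = {o for _, o in labels}
    acc = {w: 0.8 for w in workers}    # initial worker accuracy guess
    post = {o: 0.5 for o in objects}   # P(true class of object o is 1)
    for _ in range(n_iters):
        # E-step: posterior class of each object given worker accuracies
        for o in objects:
            p1, p0 = 1.0, 1.0
            for (w, obj), lab in labels.items():
                if obj != o:
                    continue
                p1 *= acc[w] if lab == 1 else 1 - acc[w]
                p0 *= acc[w] if lab == 0 else 1 - acc[w]
            post[o] = p1 / (p1 + p0)
        # M-step: worker accuracy = expected agreement with posteriors
        for w in workers:
            num, den = 0.0, 0.0
            for (wk, o), lab in labels.items():
                if wk != w:
                    continue
                num += post[o] if lab == 1 else 1 - post[o]
                den += 1
            acc[w] = num / den
    return post, acc
```

Run on labels from three hypothetical workers where two agree and one is consistently contrarian, the iteration drives the agreeing workers' accuracies toward 1 and the contrarian's toward 0, which is the sense in which such a model scores both label quality and the workers producing the labels.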
Regarding claim 4,
Mehr, Gokalp, Asar, and Wang jointly teach The computer-implemented method of claim 1,
Gokalp further teaches:
wherein the set of labeling metrics comprises:
- a time span during which the one or more labels for the respective data entity have been acquired from the agent;
- a time span between acquiring one label and a further label from the agent;
- an amount of required energy;
- an effort for labeling a data entity;
- an importance score for a data entity;
- a count of labels for a data entity of the data entities;
- a count of labels for the set of data entities;
- a count of labels acquired from a group of agents;
- a count of labels acquired from the agent;
- a measure of the similarity between labels across the group;
- a measure of similarity of labels across multiple/different groups of agents; or
- an agent classification score ((Gokalp [0045]) “In at least some embodiments, as the training iterations proceed, the interactions with individual label providers may be analyzed, e.g., to determine which label providers are more proficient in identifying particular classes of data items, to determine the rate at which individual label providers are able to generate labels, and so on…one or more metrics pertaining to label submission by the label provider such as the rate at which labels are generated, a comparison of the labels with respect to predicted classes, and so on”, the rate of label generation by the label provider corresponds to a time span between acquiring one label and a further label from an agent)
At the time of filing, one of ordinary skill in the art would have had the same motivation to combine Mehr, Gokalp, Asar, and Wang as set forth above for claim 1, from which claim 4 depends. No new embodiments are introduced, so the rationale for the combination is the same as for the parent claim.
Regarding claim 13,
Mehr teaches A method for controlling a manufacturing process ((Mehr Abstract) “Disclosed herein are machine learning-based methods and systems for automated object defect classification and adaptive, real-time control of additive manufacturing and/or welding processes”):
wherein products are manufactured according to at least one process parameter and wherein at least one property is indicative of the manufacturing of the products ((Mehr [0111]) “one or more process monitoring tools may be used to provide real-time data on process parameters or properties of the object being fabricated, both of which will be referred to herein as ‘process characterization data’”)
comprising: acquiring, by a sensor assembly of a controlling apparatus, a data entity, the data entity being indicative of at least one property of a manufacturing of a respective product ((Mehr [0125]) “The automated object defect classification methods will generally comprise: b) providing one or more sensors, wherein the one or more sensors provide real-time data for one or more object properties”)
receiving, by a data processing apparatus, a classification model from a data storage of the controlling apparatus configured to classify the manufacturing of the product and to control the manufacturing process; ((Mehr [0170]) “Some aspects of the methods and systems provided herein, such as the disclosed object defect classification or additive manufacturing process control algorithms, are implemented by way of machine (e.g., processor) executable code stored in an electronic storage location of the computer system, such as, for example, in the memory or electronic storage unit…In some cases, the code is retrieved from the storage unit and stored in the memory for ready access by the one or more processors”, the storage unit corresponds to a data storage of the controlling apparatus)
classifying the manufacturing of the product based on the data entity and a classification model ((Mehr [0125]) “The automated object defect classification methods will generally comprise:…c) providing a processor programmed to provide a classification of detected object defects using a machine learning algorithm that has been trained using the training data set of step (a), wherein the real-time data from the one or more sensors is provided as input to the machine learning algorithm and allows the classification of detected object defects to be adjusted in real-time”, the machine learning algorithm used for classification corresponds to a classification model)
and adapting the at least one process parameter based on the classifying ((Mehr [0031]) “In some embodiments, in-process inspection data (e.g., automated defect classification data) may be used by the machine learning algorithm to determine a set or sequence of process control parameter adjustments that will implement a corrective action, e.g., to adjust a layer dimension or thickness, so as to correct the defect when first detected”, process control parameter adjustments correspond to adapting at least one process parameter)
training, by a computer-implemented method, the classification model, ((Mehr [0144]) “the machine learning algorithm(s) employed in the disclosed automated defect classification and additive manufacturing process control methods may comprise a supervised learning algorithm”)
wherein the classification model is an artificial intelligence model that classifies data entities into correct categories, ((Mehr [0007]) “In some embodiments, the object defects are detected as differences between object property data and a reference data set that are larger than a specified threshold, and are classified using a one-class support vector machine (SVM) or autoencoder algorithm”)
wherein said training the classification model comprises: providing a set of data entities, each of the data entities being indicative of at least one property of a manufacturing of a respective product ((Mehr [0145]) “The training data comprises a set of paired training examples, e.g., where each example comprises a set of defects detected for a given object and the resultant classification of the given object”)
and training the classification model, using a training set that comprises the data entities and the respective one or more labels ((Mehr [0144]) “the machine learning algorithm(s) employed in the disclosed automated defect classification and additive manufacturing process control methods may comprise a supervised learning algorithm”, (Mehr [0145]) “Supervised learning algorithms: In the context of the present disclosure, supervised learning algorithms are algorithms that rely on the use of a set of labeled training data to infer the relationship between a set of one or more defects identified for a given object and a classification of the object…The training data comprises a set of paired training examples, e.g., where each example comprises a set of defects detected for a given object and the resultant classification of the given object”, the classification is the label)
and controlling the manufacturing process, ((Mehr [0001]) “Also disclosed are methods and systems for performing real-time adaptive control of free form deposition or joining processes, including additive manufacturing or welding processes”)
wherein the manufacturing process manufactures products, ((Mehr [0001]) “Additive manufacturing processes are fabrication techniques that allow one to produce functional complex parts layer by layer, without the use of molds or dies”)
wherein said controlling is based on a classification of the data entities by the classification model into correct categories, ((Mehr [0031]) “in-process inspection data (e.g., automated defect classification data) may be used by the machine learning algorithm to determine a set or sequence of process control parameter adjustments that will implement a corrective action, e.g., to adjust a layer dimension or thickness, so as to correct the defect when first detected”)
and wherein said controlling increases a yield of the products manufactured ((Mehr [0001]) “Also disclosed are methods and systems for performing real-time adaptive control of free form deposition or joining processes, including additive manufacturing or welding processes, to improve process yield, throughput, and quality”)
Gokalp teaches the following further limitations that Mehr does not teach:
acquiring one or more labels for each of the data entities from an agent ((Gokalp [0045]) “the classification service may employ one or more label providers for a given classification problem, such as subject matter experts with respect to the problem domain, volunteers, or a group of individuals who have been identified via a web-based task marketplace (e.g., a web site at which individuals may register their interest in performing tasks such as labeling data items for a fee)”, label providers are agents)
determining a set of labeling metrics based on the acquiring from the agent ((Gokalp [0045]) “In at least some embodiments, as the training iterations proceed, the interactions with individual label providers may be analyzed, e.g., to determine which label providers are more proficient in identifying particular classes of data items, to determine the rate at which individual label providers are able to generate labels, and so on…one or more metrics pertaining to label submission by the label provider such as the rate at which labels are generated, a comparison of the labels with respect to predicted classes, and so on”, the metrics pertaining to label submission correspond to labeling metrics)
validating the trained classification model ((Gokalp [0064]) “a final (with respect to the current training iteration) classifier may also be trained in a given iteration, e.g., using all the labeled training data available, and the results obtained from the final-with-respect-to-the-current-iteration classifier on a test set may be used to evaluate whether quality-related training completion criteria have been met”, evaluating the final classifier on a test set corresponds to validating it) based on predefined criteria and yielding a classifier score ((Gokalp [0055]) “with respect to a subset of training metrics and associated training status, a set of diagnosis tests may be defined to help determine when the training procedure has met its overall objectives and should therefore be terminated. In effect, a given diagnosis test may provide a binary indicator of whether a given metric’s status has met a particular threshold condition for publishing or finalizing the classifier being trained”, the binary indicator of if a metric meets the threshold for finalizing the classifier corresponds to a classifier score, the set of diagnosis tests corresponds to predefined criteria)
training a labeling score model based on the data entities, the respective one or more labels acquired from the agent, the set of labeling metrics based on the acquiring from the agent, and the classifier score [obtained from said validating the trained classification model using a validation set] ((Gokalp [0039]) “In at least some embodiments, at least two types of models may be trained iteratively: (a) a set of one or more models whose output with respect to a training set is used to select candidates for labeling feedback for subsequent training iterations (e.g., using an active learning algorithm which uses variance in predictions among the different models for a given data item)”, the model or set of models used to select candidates for labeling feedback corresponds to a labeling score model, Asar teaches obtaining a classifier score from validating a trained classification model using a validation set more explicitly)
determining a labeling score for the agent based on the labeling score model and the respective one or more labels and set of labeling metrics ((Gokalp [0132]) “In scenarios in which multiple label providers are used, individual ones of the label providers may have differing capabilities and responsiveness characteristics---e.g., some label providers may be faster or otherwise superior to others with respect to identifying data items of particular classes, and so on…a classifier training subsystem 2402 may comprise, among other components, a label provider skills/capabilities detector 2404 implemented using one or more computing devices. Such a detector may, for example, keep track of how quickly different label providers such as 2420A, 2420B or 2420C respond to label feedback requests, the extent to which the labels provided by the different label providers 2420 tend to agree with the class predictions generated at the training subsystem, and so on. Using such metrics, respective profiles of the different label providers may be generated in at least some embodiments”, the metric of how often the label provider’s labels match that of the training subsystem’s corresponds to a labeling score for an agent)
At the time of filing, one of ordinary skill in the art would have had motivation to combine Mehr and Gokalp by taking the method for using a classification model to control a manufacturing process, including training the model, taught by Mehr, and combining it with the method for acquiring labels from agents taught by Gokalp, as obtaining labeled data is essential to the supervised training of machine learning classifiers, and more niche classification domains that lack large, publicly available datasets would benefit from the production of more labeled data, since models trained on larger amounts of data tend to be more accurate. Such a combination would be obvious.
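For illustration only, the per-provider metric Gokalp [0132] describes (the extent to which a label provider's labels agree with the class predictions generated at the training subsystem), which the rejection maps to a labeling score for an agent, can be sketched as follows; the function and argument names are hypothetical:

```python
# Illustrative sketch of an agreement-based label-provider metric of
# the kind Gokalp [0132] describes. Hypothetical names; not the
# claimed labeling score model.

def labeling_score(provider_labels, predicted_classes):
    """Fraction of a provider's labels matching the model's predictions.

    provider_labels / predicted_classes: dicts mapping item id -> class.
    Only items the provider actually labeled are scored."""
    scored = [item for item in provider_labels if item in predicted_classes]
    if not scored:
        return 0.0
    agree = sum(provider_labels[i] == predicted_classes[i] for i in scored)
    return agree / len(scored)
```

Tracking this agreement fraction per provider over training iterations would yield the kind of provider profile Gokalp describes.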
Asar teaches the following further limitations that neither Mehr nor Gokalp explicitly teaches:
and the classifier score obtained from said validation of the trained classification model using a validation set, ((Asar [0071]) “Validation dataset: Dataset used for validating the model during the learning phase and to estimate the prediction error for model selection”, prediction error of a model corresponds to a classifier score, a person of ordinary skill in the art understands a validation set to be used to validate the training of a model)
wherein the classifier score is a numerical measure of how well the trained classification model fits the validation set, ((Asar [0148]-[0149]) “We perform a single split and select a set of optimization parameters for training/validation. If this is a classification problem, then once training has been performed, we perform validation using multiple thresholds (assume T number of thresholds)…For each threshold value, we calculate validation error rate for that threshold as follows: errate=Sum(LF across all inputs in the validation set)/(Total number of element in the validation set)”, a validation error rate for a classification problem corresponds to a classifier score that is a numerical measure of how well a trained classification model fits a validation set)
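The threshold-sweep validation in Asar [0148]-[0149] (for each candidate threshold, errate = sum of loss flags across the validation set, divided by the validation set size) can be sketched roughly as below; the function names and toy data are illustrative assumptions, not drawn from Asar.

```python
# Sketch of Asar's per-threshold validation error rate ([0148]-[0149]):
# flag each validation example as a loss (1) when the thresholded score
# disagrees with its label, then errate = sum(losses) / len(validation set).

def validation_error_rate(scores, labels, threshold):
    """Classifier score for one threshold: fraction of misclassified examples."""
    losses = [1 if (s >= threshold) != bool(y) else 0
              for s, y in zip(scores, labels)]
    return sum(losses) / len(losses)

def best_threshold(scores, labels, thresholds):
    """Pick the threshold minimizing the validation error rate."""
    return min(thresholds, key=lambda t: validation_error_rate(scores, labels, t))

scores = [0.9, 0.8, 0.3, 0.1]   # model outputs on a toy validation set
labels = [1, 1, 0, 0]           # ground-truth classes
t = best_threshold(scores, labels, [0.2, 0.5, 0.85])
```

The error rate computed this way is exactly the kind of numerical measure of fit to the validation set that the claim language recites as a classifier score.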
At the time of filing, one of ordinary skill in the art would have been motivated to combine Mehr, Gokalp, and Asar by taking the method for using a classification model to control a manufacturing process, including training the model and acquiring labels from agents, taught jointly by Mehr and Gokalp, and including the validation of the model on a validation dataset to determine the performance of the model, taught by Asar, because using a validation dataset to validate machine learning models is a well-known technique in the art for obtaining early feedback on a model during training in order to tune parameters, without exposing the model to the test dataset used to evaluate its ultimate performance. Such a combination would have been obvious.
Wang teaches the following further limitation that none of Mehr, Gokalp, and Asar teaches:
and wherein the labeling score model is an artificial intelligence model that generates and outputs a labeling score which is a numerical measure of an efficiency of a labeling process performed by the agent and of a quality of a label obtained from the agent ((Wang Pg. 6) “In this section, we describe several algorithms for inferring the true classes of objects and the quality of workers…Another advanced inference technique is EM, first proposed by Dawid and Skene (1979) in the context of medical diagnosis. The algorithm iterates until convergence, following two steps: (1) it estimates the true class for each object using the labels provided by a set of workers, accounting for the error rates of each worker; and (2) it estimates the error rates of each worker by comparing the submitted labels with estimated true class for each object…we propose a generative model of labels, abilities, and difficulties (GLAD) and use an EM approach to obtain the maximum likelihood estimates of the α(k) , β(o) , and t(o) for each worker (k) and each object (o)”, Wang Pg. 8, Algorithm 4 shows inference of an artificial intelligence model that generates scores for labels of objects and quality of workers that provide labels)
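The EM procedure Wang describes (Dawid-Skene style: alternate between estimating each object's true class from worker labels weighted by worker quality, and re-estimating each worker's quality from agreement with those estimates) can be sketched roughly as follows. The symmetric single-accuracy-per-worker simplification, the 0.8 initial guess, and the toy data are illustrative assumptions, not taken from Wang.

```python
# Simplified sketch of a Dawid-Skene-style EM loop for binary labels:
# (1) E-step: weighted vote for each object's true class, with each worker's
#     vote weighted by (accuracy - 0.5) so unreliable workers count against;
# (2) M-step: re-estimate each worker's accuracy against the estimated truths.

def em_worker_quality(labels, n_iters=10):
    """labels: dict worker -> {object: 0/1 label}. Returns (truths, accuracies)."""
    workers = list(labels)
    objects = sorted({o for w in workers for o in labels[w]})
    acc = {w: 0.8 for w in workers}          # illustrative initial guess
    truth = {}
    for _ in range(n_iters):
        for o in objects:                    # E-step: weighted majority vote
            vote = sum((acc[w] - 0.5) * (1 if labels[w][o] == 1 else -1)
                       for w in workers if o in labels[w])
            truth[o] = 1 if vote > 0 else 0
        for w in workers:                    # M-step: agreement with truths
            hits = sum(1 for o in labels[w] if labels[w][o] == truth[o])
            acc[w] = hits / len(labels[w])
    return truth, acc

# Toy data: two agreeing workers and one mostly-disagreeing worker.
labels = {"w1": {"a": 1, "b": 0, "c": 1},
          "w2": {"a": 1, "b": 0, "c": 1},
          "w3": {"a": 0, "b": 1, "c": 1}}
truth, acc = em_worker_quality(labels)
```

The per-worker accuracy estimate produced by the M-step corresponds to the kind of label-quality score the claim recites, inferred jointly with the estimated true classes.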
At the time of filing, one of ordinary skill in the art would have been motivated to combine Mehr, Gokalp, Asar, and Wang by taking the method for using a classification model to control a manufacturing process, including training the model, acquiring labels from agents, and validation on a validation set, taught jointly by Mehr, Gokalp, and Asar, and including a labeling score artificial intelligence model that creates a score evaluating the labeling efficiency of an agent and the quality of the agent's labels, taught by Wang, as Wang teaches: (Wang Pg. 19) “we introduce two novel metrics that can be used to objectively rank the performance of crowdsourced workers, both allowing employers to separate workers’ correctable errors from uncorrectable errors and incorporate unequal costs of different types of classification errors. In particular, the contributed value metric directly measures worker’s individual contribution in quality assurance through redundancy and provides a basis for employers to develop more fair and efficient compensation schemes”. Such a combination would have been obvious.
Regarding claim 14,
Mehr teaches A controlling apparatus for controlling a manufacturing process ((Mehr Abstract) “Disclosed herein are machine learning-based methods and systems for automated object defect classification and adaptive, real-time control of additive manufacturing and/or welding processes”):
wherein products are manufactured by a manufacturing system according to at least one process parameter and wherein at least one property is indicative of the manufacturing of the products, ((Mehr [0111]) “one or more process monitoring tools may be used to provide real-time data on process parameters or properties of the object being fabricated, both of which will be referred to herein as ‘process characterization data’”)
the controlling apparatus comprising: a sensor assembly adapted to acquire a data entity, the data entity being indicative of at least one property of a manufacturing of a respective product; ((Mehr [0125]) “The automated object defect classification methods will generally comprise: b) providing one or more sensors, wherein the one or more sensors provide real-time data for one or more object properties”, (Mehr [0177]) “One or more automated inspection tools, e.g., machine vision systems coupled with automated image processing algorithms, are used to monitor and measure feature dimensions, angles, surface finishes, and/or other properties of fabricated parts both in-process and post-build. Defects may be identified…and classified”, a machine vision system that monitors and measures properties of fabricated parts corresponds to a sensor assembly that acquires data entities indicative of at least one property of the manufacturing of a product)
and a data processing apparatus adapted to classify the manufacturing of the product based on the data entity and a classification model; ((Mehr [0125]) “The automated object defect classification methods will generally comprise:…c) providing a processor programmed to provide a classification of detected object defects using a machine learning algorithm that has been trained using the training data set of step (a), wherein the real-time data from the one or more sensors is provided as input to the machine learning algorithm and allows the classification of detected object defects to be adjusted in real-time”)
wherein the data processing apparatus is further adapted to receive the classification model from a data storage of the controlling apparatus, onto which the classification model is stored, or from a distributed database ((Mehr [0170]) “Some aspects of the methods and systems provided herein, such as the disclosed object defect classification or additive manufacturing process control algorithms, are implemented by way of machine (e.g., processor) executable code stored in an electronic storage location of the computer system, such as, for example, in the memory or electronic storage unit…In some cases, the code is retrieved from the storage unit and stored in the memory for ready access by the one or more processors”, the storage unit corresponds to a data storage of the controlling apparatus)
said classification model being generated, trained, and validated by a computer-implemented method comprising: providing a set of data entities, each of the data entities being indicative of at least one property of a manufacturing of a respective product ((Mehr [0145]) “The training data comprises a set of paired training examples, e.g., where each example comprises a set of defects detected for a given object and the resultant classification of the given object”)
training the classification model using a training set, ((Mehr [0144]) “the machine learning algorithm(s) employed in the disclosed automated defect classification and additive manufacturing process control methods may comprise a supervised learning algorithm”, supervised learning algorithms use a training set)
wherein the classification model is an artificial intelligence model that classifies data entities into correct categories, ((Mehr [0007]) “In some embodiments, the object defects are detected as differences between object property data and a reference data set that are larger than a specified threshold, and are classified using a one-class support vector machine (SVM) or autoencoder algorithm”)
and wherein the training set comprises the data entities and the respective one or more labels; ((Mehr [0132]) “In some preferred embodiments, object defects may be detected and classified using an unsupervised one-class support vector machine (SVM), autoencoder, clustering, or nearest neighbor (e.g., kNN) machine learning algorithm and a training data set that comprises object property data for both defective and defect-free objects”)
Gokalp teaches the following further limitations that Mehr does not teach:
acquiring one or more labels for each of the data entities from an agent ((Gokalp [0045]) “the classification service may employ one or more label providers for a given classification problem, such as subject matter experts with respect to the problem domain, volunteers, or a group of individuals who have been identified via a web-based task marketplace (e.g., a web site at which individuals may register their interest in performing tasks such as labeling data items for a fee)”)
determining a set of labeling metrics based on the acquiring from the agent ((Gokalp [0045]) “In at least some embodiments, as the training iterations proceed, the interactions with individual label providers may be analyzed, e.g., to determine which label providers are more proficient in identifying particular classes of data items, to determine the rate at which individual label providers are able to generate labels, and so on…one or more metrics pertaining to label submission by the label provider such as the rate at which labels are generated, a comparison of the labels with respect to predicted classes, and so on”)
validating the trained classification model ((Gokalp [0064]) “a final (with respect to the current training iteration) classifier may also be trained in a given iteration, e.g., using all the labeled training data available, and the results obtained from the final-with-respect-to-the-current-iteration classifier on a test set may be used to evaluate whether quality-related training completion criteria have been met”) based on predefined criteria and yielding a classifier score ((Gokalp [0055]) “with respect to a subset of training metrics and associated training status, a set of diagnosis tests may be defined to help determine when the training procedure has met its overall objectives and should therefore be terminated. In effect, a given diagnosis test may provide a binary indicator of whether a given metric’s status has met a particular threshold condition for publishing or finalizing the classifier being trained”)
training a labeling score model based on the data entities, the respective one or more labels, the set of labeling metrics based on the acquiring from the agent, ((Gokalp [0039]) “In at least some embodiments, at least two types of models may be trained iteratively: (a) a set of one or more models whose output with respect to a training set is used to select candidates for labeling feedback for subsequent training iterations (e.g., using an active learning algorithm which uses variance in predictions among the different models for a given data item)”)
and determining a labeling score for the agent based on the labeling score model and the respective one or more labels and set of labeling metrics ((Gokalp [0132]) “In scenarios in which multiple label providers are used, individual ones of the label providers may have differing capabilities and responsiveness characteristics---e.g., some label providers may be faster or otherwise superior to others with respect to identifying data items of particular classes, and so on…a classifier training subsystem 2402 may comprise, among other components, a label provider skills/capabilities detector 2404 implemented using one or more computing devices. Such a detector may, for example, keep track of how quickly different label providers such as 2420A, 2420B or 2420C respond to label feedback requests, the extent to which the labels provided by the different label providers 2420 tend to agree with the class predictions generated at the training subsystem, and so on. Using such metrics, respective profiles of the different label providers may be generated in at least some embodiments”, broadest reasonable interpretation of a labeling score includes a speed at which labeling requests are fulfilled or an extent to which labels provided are correct)
At the time of filing, one of ordinary skill in the art would have been motivated to combine Mehr and Gokalp by taking the controlling apparatus for controlling a manufacturing process, wherein a classification model is a component, and wherein the model is trained, taught by Mehr, and combining it with the method for acquiring labels from agents taught by Gokalp, because obtaining labeled data is essential to the supervised training of machine learning classifiers, and more niche classification domains that lack large, publicly available datasets would benefit from the production of more labeled data, since models trained on larger amounts of data tend to be more accurate. Such a combination would have been obvious.
Asar teaches the following further limitations that neither Mehr nor Gokalp explicitly teaches:
and the classifier score obtained from said validating the trained classification model using a validation set, ((Asar [0071]) “Validation dataset: Dataset used for validating the model during the learning phase and to estimate the prediction error for model selection”, prediction error of a model corresponds to a classifier score, a person of ordinary skill in the art understands a validation set to be used to validate the training of a model)
At the time of filing, one of ordinary skill in the art would have been motivated to combine Mehr, Gokalp, and Asar by taking the controlling apparatus for controlling a manufacturing process, wherein a classification model is a component, and including acquiring labels for the classification model from agents, taught jointly by Mehr and Gokalp, and including the validation of the model on a validation dataset to determine the performance of the model, taught by Asar, because using a validation dataset to validate machine learning models is a well-known technique in the art for obtaining early feedback on a model during training in order to tune parameters, without exposing the model to the test dataset used to evaluate its ultimate performance. Such a combination would have been obvious.
Wang teaches the following further limitation that none of Mehr, Gokalp, and Asar teaches:
wherein the labeling score model is an artificial intelligence model that generates and outputs a labeling score which is a numerical measure of an efficiency of a labeling process performed by the agent and of a quality of a label obtained from the agent ((Wang Pg. 6) “In this section, we describe several algorithms for inferring the true classes of objects and the quality of workers…Another advanced inference technique is EM, first proposed by Dawid and Skene (1979) in the context of medical diagnosis. The algorithm iterates until convergence, following two steps: (1) it estimates the true class for each object using the labels provided by a set of workers, accounting for the error rates of each worker; and (2) it estimates the error rates of each worker by comparing the submitted labels with estimated true class for each object…we propose a generative model of labels, abilities, and difficulties (GLAD) and use an EM approach to obtain the maximum likelihood estimates of the α(k) , β(o) , and t(o) for each worker (k) and each object (o)”, Wang Pg. 8, Algorithm 4 shows inference of an artificial intelligence model that generates scores for labels of objects and quality of workers that provide labels)
At the time of filing, one of ordinary skill in the art would have been motivated to combine Mehr, Gokalp, Asar, and Wang by taking the controlling apparatus for controlling a manufacturing process, wherein a classification model is a component, and including acquiring labels for the classification model from agents and validation of the model on a validation set, taught jointly by Mehr, Gokalp, and Asar, and including a labeling score artificial intelligence model that creates a score evaluating the labeling efficiency of an agent and the quality of the agent's labels, taught by Wang, as Wang teaches: (Wang Pg. 19) “we introduce two novel metrics that can be used to objectively rank the performance of crowdsourced workers, both allowing employers to separate workers’ correctable errors from uncorrectable errors and incorporate unequal costs of different types of classification errors. In particular, the contributed value metric directly measures worker’s individual contribution in quality assurance through redundancy and provides a basis for employers to develop more fair and efficient compensation schemes”. Such a combination would have been obvious.