DETAILED ACTION
This Office action is in response to the submission of the application on 11/23/2022.
Claims 1-17 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/23/2022, 09/27/2023, and 04/23/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) is/are:
“an input unit” in claims 1 and 9.
“a processing unit” in claims 1, 9, and 16-17.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112(a)
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Specifically, the following claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
“an input unit” in claims 1 and 9.
“a processing unit” in claims 1, 9, and 16-17.
However, while the specification mentions the configuration of an input unit and a processing unit as part of an industrial process model generation system (e.g., paragraphs 0005-0006), it is silent as to the structure of these ‘units’. According to MPEP 2181(II)(B), "When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a)."
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The following claim limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
“an input unit” in claims 1 and 9.
“a processing unit” in claims 1, 9, and 16-17.
However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Specifically, while the specification mentions the configuration of an input unit and a processing unit as part of an industrial process model generation system (e.g., paragraphs 0005-0006), it is silent as to the structure of these ‘units’. Therefore, the independent claims 1, 9, and 16-17 and their dependent claims 2-8 and 10-15 are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
For examination purposes, the input unit and the processing unit will be interpreted as computer-implemented functional blocks of code for performing their associated steps.
Applicant may:
(a) Amend the claims so that the claim limitations will no longer be interpreted as limitations under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claims, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1:
Step 1: The claim is directed to a system, which falls within the statutory category of a machine/manufacture.
Step 2A Prong 1: The claim is directed to an abstract idea. Specifically, the claim recites:
wherein, the processing unit is configured to generate a plurality of industrial process behavioral data, wherein industrial process behavioral data is generated for at least some of the plurality of input value trajectories, and wherein the generation of the industrial process behavioral data for the at least some of the plurality of input value trajectories comprises utilization of the simulator; (Abstract idea – mental process. Generating industrial process behavioral data based on input value trajectories can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the input value trajectories on a sheet of paper, mentally simulating the industrial process, and writing out indications of predicted process behavior by hand. The courts have recognized that claims can recite a mental process even if they are claimed as being performed on a computer. See MPEP 2106.04(a)(2)(III).)
wherein, the processing unit is configured to determine to train or not to train the machine learning algorithm using the first behavioral data, the determination comprising a comparison of the first modelled result with a performance condition; (Abstract idea – mental process. Determining whether or not to train the model based on a comparison of the model output and a performance condition can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the model’s predicted output on a display, mentally evaluating the accuracy of the prediction, mentally comparing the accuracy to an accuracy threshold, and mentally determining whether or not to train the model based on the comparison. See MPEP 2106.04(a)(2)(III).)
wherein, the processing unit is configured to determine to train or not to train the machine learning algorithm using the second behavioral data or to further train or not to further train the machine learning algorithm using the second behavioral data, the determination comprising a comparison of the second modelled result with the performance condition. (Abstract idea – mental process. Determining whether or not to train or further train the model based on a comparison of the model output and a performance condition can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the model’s predicted output on a display, mentally evaluating the accuracy of the prediction, mentally comparing the accuracy to an accuracy threshold, and mentally determining whether or not to train the model based on the comparison. See MPEP 2106.04(a)(2)(III).)
Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination. Specifically, the claim recites the additional elements:
an input unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
a processing unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the input unit is configured to receive a plurality of input value trajectories comprising operational input value trajectories and simulation input value trajectories relating to an industrial process; (Receiving input value trajectories amounts to adding insignificant extra-solution activity (necessary data gathering) to the judicial exception – see MPEP 2106.05(g).)
wherein, the processing unit is configured to implement a simulator of the industrial process; (Implementing simulation using the processing unit amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to implement a machine learning algorithm that models the industrial process; (Implementation of a generic machine learning algorithm is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to train the machine learning algorithm; (Training a generic machine learning algorithm is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to process a first behavioral data of the plurality of behavioral data with the machine learning algorithm to determine a first modelled result; (Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to process a second behavioral data of the plurality of behavioral data with the machine learning algorithm to determine a second modelled result; (Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Specifically, the claim recites the additional elements:
an input unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
a processing unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the input unit is configured to receive a plurality of input value trajectories comprising operational input value trajectories and simulation input value trajectories relating to an industrial process; (Receiving input value trajectories amounts to adding insignificant extra-solution activity (necessary data gathering) to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d).)
wherein, the processing unit is configured to implement a simulator of the industrial process; (Implementing simulation using the processing unit amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to implement a machine learning algorithm that models the industrial process; (Implementation of a generic machine learning algorithm is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to train the machine learning algorithm; (Training a generic machine learning algorithm is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to process a first behavioral data of the plurality of behavioral data with the machine learning algorithm to determine a first modelled result; (Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to process a second behavioral data of the plurality of behavioral data with the machine learning algorithm to determine a second modelled result; (Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Claims 2-8:
Claim 2 recites “The system according to claim 1, wherein the plurality of input value trajectories comprises one or more of: process data; temperature data; pressure data; flow data; level data; voltage data; current data; power data; actuator data; valve data; sensor data; and controller data.” This claim merely qualifies the type of data received by the input unit, and thus amounts to adding insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g) – which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d). Therefore, the claim does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 3 recites “The system according to claim 1, wherein the determination to train or not to train the machine learning algorithm using the first behavioral data comprises a determination of a sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data.” Determining whether to train the model based on a determination of the model’s sensitivity to a portion of input data can practically be performed in the human mind or with the aid of pen and paper (i.e., a mental process), for example, by mentally comparing the model’s predicted output for the portion of input data to the expected output, mentally determining a measure of sensitivity based on the comparison, and if the sensitivity meets a threshold, mentally determining that training should be performed. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 4 recites “The system according to claim 1, wherein the determination to train or not to train the machine learning algorithm using the second behavioral data, comprises a determination of the sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data.” Determining whether to train the model based on a determination of the model’s sensitivity to a portion of input data can practically be performed in the human mind or with the aid of pen and paper (i.e., a mental process), for example, by mentally comparing the model’s predicted output for the portion of input data to the expected output, mentally determining a measure of sensitivity based on the comparison, and if the sensitivity meets a threshold, mentally determining that training should be performed. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 5 recites “The system according to claim 1, wherein the determination to further train or not to further train the machine learning algorithm using the second behavioral data, comprises a determination of the sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data.” Determining whether to further train the model based on a determination of the model’s sensitivity to a portion of input data can practically be performed in the human mind or with the aid of pen and paper (i.e., a mental process), for example, by mentally comparing the model’s predicted output for the portion of input data to the expected output, mentally determining a measure of sensitivity based on the comparison, and if the sensitivity meets a threshold, mentally determining that further training should be performed. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 6 recites “The system according to claim 4, wherein the determination of the sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data comprises an analysis of a loss function of the trained machine learning algorithm with respect to at least the portion of the plurality of behavioral data.” Determining sensitivity based on analysis of a loss function can practically be performed in the human mind or with the aid of pen and paper (i.e., a mental process), for example, by mentally calculating a loss value for the portion of input data based on its predicted and expected output data using a simple loss function, and mentally determining this loss value to be a measure of sensitivity. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 4, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 7 recites “The system according to claim 1, wherein the processing unit is configured to determine to stop training of the machine learning algorithm, the determination comprising a determination of a sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data.” Determining to stop model training based on a determination of the model’s sensitivity to a portion of input data can practically be performed in the human mind or with the aid of pen and paper (i.e., a mental process), for example, by mentally comparing the model’s predicted output for the portion of input data to the expected output, mentally determining a measure of sensitivity based on the comparison, and if the sensitivity meets a threshold, mentally determining that training should be stopped. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 8 recites “The system according to claim 1, wherein the processing unit is configured to select the at least some of the plurality of input value trajectories.” Selecting input value trajectories can practically be performed in the human mind or with the aid of pen and paper (i.e., a mental process), for example, by viewing the input value trajectories on a sheet of paper and mentally determining trajectories to be selected. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 1, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 9:
Step 1: The claim is directed to a system, which falls within the statutory category of a machine/manufacture.
Step 2A Prong 1: The claim is directed to an abstract idea. Specifically, the claim recites:
wherein, the processing unit is configured to generate a plurality of industrial process behavioral data, wherein the industrial process behavioral data is generated for at least some of the plurality of input value trajectories, and wherein the generation of the industrial process behavioral data for the at least some of the plurality of input value trajectories comprises utilization of the simulator; (Abstract idea – mental process. Generating industrial process behavioral data based on input value trajectories can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the input value trajectories on a sheet of paper, mentally simulating the industrial process, and writing out indications of predicted process behavior by hand. The courts have recognized that claims can recite a mental process even if they are claimed as being performed on a computer. See MPEP 2106.04(a)(2)(III).)
wherein, the processing unit is configured to determine to train the first machine learning algorithm using the first behavioral data or implement a second machine learning algorithm of the plurality of machine learning algorithms, and wherein the determination comprises a comparison of the first machine learning algorithm first modelled result with a performance condition. (Abstract idea – mental process. Determining whether to train a first model or implement a second model based on a comparison of the model output and a performance condition can practically be performed in the human mind or with the aid of pen and paper, for example, by viewing the first model’s predicted output on a display, mentally evaluating the accuracy of the prediction, mentally comparing the accuracy to an accuracy threshold, and mentally determining whether to train the model or implement another model based on the comparison. See MPEP 2106.04(a)(2)(III).)
Step 2A Prong 2: The additional elements recited in the claim do not integrate the abstract idea into a practical application, individually or in combination. Specifically, the claim recites the additional elements:
an input unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
a processing unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the input unit is configured to receive a plurality of input value trajectories comprising operational input value trajectories and simulation input value trajectories; (Receiving input value trajectories amounts to adding insignificant extra-solution activity (necessary data gathering) to the judicial exception – see MPEP2106.05(g).)
wherein, the processing unit is configured to implement a simulator of the industrial process; (Implementing simulation using the processing unit amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to implement a plurality of machine learning algorithm that model the industrial process; (Implementation of generic machine learning algorithms is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to process a first behavioral data of the plurality of behavioral data with a first machine learning algorithm of the plurality of machine learning algorithms to determine a first machine learning algorithm first modelled result; (Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Specifically, the claim recites the additional elements:
an input unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
a processing unit; (This limitation is interpreted as implementation of the disclosed steps in a computing environment, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the input unit is configured to receive a plurality of input value trajectories comprising operational input value trajectories and simulation input value trajectories; (Receiving input value trajectories amounts to adding insignificant extra-solution activity (necessary data gathering) to the judicial exception – see MPEP 2106.05(g). Further, the limitation is directed to receiving or transmitting data over a network, which the courts have found to be well-understood, routine, and conventional in the computer arts – see MPEP 2106.05(d).)
wherein, the processing unit is configured to implement a simulator of the industrial process; (Implementing simulation using the processing unit amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to implement a plurality of machine learning algorithm that model the industrial process; (Implementation of generic machine learning algorithms is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
wherein, the processing unit is configured to process a first behavioral data of the plurality of behavioral data with a first machine learning algorithm of the plurality of machine learning algorithms to determine a first machine learning algorithm first modelled result; (Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)
Claims 10-15:
Claim 10 recites The system according to claim 9, wherein the processing unit is configured to process a second behavioral data of the plurality of behavioral data with the first machine learning algorithm to determine a first machine learning algorithm second modelled result; and wherein the processing unit is configured to determine to train the first machine learning algorithm using the second behavioral data or implement the second machine learning algorithm, wherein the determination comprising a comparison of the first machine learning algorithm second modelled result with the performance condition. Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Determining whether to train a first model or implement a second model based on a comparison of the model output and a performance condition can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the first model’s predicted output on a display, mentally evaluating the accuracy of the prediction, mentally comparing the accuracy to an accuracy threshold, and mentally determining whether to train the model or implement another model based on the comparison. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 9, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 11 recites The system according to claim 9, wherein the processing unit is configured to process the first behavioral data with the second machine learning algorithm to determine a second machine learning algorithm first modelled result; and wherein the processing unit is configured to determine to train the second machine learning algorithm using the first behavioral data or implement a third machine learning algorithm of the plurality of machine learning algorithms, wherein the determination comprises a comparison of the second machine learning algorithm first modelled result with the performance condition. Using a generic machine learning algorithm to process input data and obtain a result is standard in the field of machine learning, and thus amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP 2106.05(f). Determining whether to train a second model or implement a third model based on a comparison of the model output and a performance condition can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the second model’s predicted output on a display, mentally evaluating the accuracy of the prediction, mentally comparing the accuracy to an accuracy threshold, and mentally determining whether to train the model or implement another model based on the comparison. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 9, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 12 recites The system according to claim 9, wherein the determination to train the existing machine learning algorithm using behavioral data or implement a new machine learning algorithm comprises a determination of a sensitivity of the existing machine learning algorithm to at least a portion of the plurality of behavioral data. Determining whether to train the model or implement another model based on a determination of the model’s sensitivity to a portion of input data can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by mentally comparing the model’s predicted output for the portion of input data to the expected output, mentally determining a measure of sensitivity based on the comparison, and based on whether the sensitivity meets a threshold, mentally determining that training should be performed or another model should be implemented. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 9, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 13 recites The system according to claim 9, wherein the processing unit is configured to determine to stop training of the existing machine learning algorithm, wherein the determination comprises a determination of a sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data. Determining to stop model training based on a determination of the model’s sensitivity to a portion of input data can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by mentally comparing the model’s predicted output for the portion of input data to the expected output, mentally determining a measure of sensitivity based on the comparison, and if the sensitivity meets a threshold, mentally determining that training should be stopped. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 9, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 14 recites The system according to claim 12, wherein the determination of the sensitivity of the trained machine learning algorithm to at least the portion of the plurality of behavioral data comprises an analysis of a loss function of the trained machine learning algorithm with respect to at least the portion of the plurality of behavioral data. Determining sensitivity based on analysis of a loss function can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by mentally calculating a loss value for the portion of input data based on its predicted and expected output data using a simple loss function, and mentally determining this loss value to be a measure of sensitivity. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 12, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
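The loss-function analysis described for claim 14 can be illustrated with a minimal sketch. This is a hypothetical example only: the choice of mean squared error as the loss function, and the treatment of the loss value as the measure of sensitivity, are assumptions for illustration and not part of the claims or the record.

```python
def mse_loss(predicted, expected):
    """A simple loss function (mean squared error), chosen here as an
    illustrative stand-in; the claim does not specify a loss function."""
    return sum((p - e) ** 2 for p, e in zip(predicted, expected)) / len(expected)

# The loss value computed over a portion of the behavioral data is
# taken as the measure of sensitivity, per the analysis above.
sensitivity = mse_loss([1.0, 2.5], [1.0, 2.0])
assert abs(sensitivity - 0.125) < 1e-9
```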
Claim 15 recites The system according to claim 9, wherein the processing unit is configured to select the at least some of the plurality of input value trajectories. Selecting input value trajectories can practically be performed in the human mind or with the aid of pen and paper (i.e. mental process), for example, by viewing the input value trajectories on a sheet of paper and mentally determining trajectories to be selected. See MPEP 2106.04(a)(2)(III). Therefore, the claim merges with the abstract idea recited in claim 9, and does not recite additional elements that are sufficient to amount to significantly more than the abstract idea.
Claim 16 is a method claim containing substantially the same elements as system claim 1, and is rejected on the same grounds under 35 U.S.C. 101 as claim 1, mutatis mutandis.
Claim 17 is a method claim containing substantially the same elements as system claim 9, and is rejected on the same grounds under 35 U.S.C. 101 as claim 9, mutatis mutandis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over
Alexopoulos et al. (hereinafter Alexopoulos), “Digital twin-driven supervised machine learning for the development of artificial intelligence applications in manufacturing” (published 04/08/2020) in view of
Burger et al. (hereinafter Burger), U.S. Patent Application Publication US-20200265301-A1 (filed 02/15/2019).
Regarding Claim 1,
Alexopoulos teaches An industrial process model generation system, comprising:
an input unit; and (Pg. 4, figure 1 shows “[u]tilization of the digital twin for the development of ML-based applications for smart manufacturing.” The digital twin’s “Communication Layer” (i.e. input unit) is shown in the right column.)
a processing unit; (Pg. 4, figure 1 shows “[u]tilization of the digital twin for the development of ML-based applications for smart manufacturing.” The digital twin’s “Information Layer” and “Services Layer” (i.e. processing unit) are shown in the right column.)
wherein, the input unit is configured to receive a plurality of input value trajectories comprising operational input value trajectories and simulation input value trajectories relating to an industrial process; (Pg. 4-5, section 2: “[T]he virtual model is timely combined with status datasets generated in the real environment (e.g. IoT data, image data, machine data and more) and can become available to the DT [digital twin] model through the underlying communication layer consisting of several protocols, such as HTTP and MQTT… a highly realistic DT is built and maintained by combining data captured through IoT technology (e.g. cameras, sensors), together with engineering data (e.g. kinematics, 3D geometry) from digital factory tools.” Pg. 5, section 3: “[T]he proposed concept can be applied in an industrially relevant use case” The communication layer (i.e. input unit) receives data captured through IoT technology (i.e. simulation input value trajectories) and data from digital factory tools (i.e. operational input value trajectories) relating to an industrial process.)
wherein, the processing unit is configured to implement a simulator of the industrial process; (Pg. 4, figure 1 shows “[u]tilization of the digital twin for the development of ML-based applications for smart manufacturing.” The digital twin’s “Services Layer” (i.e. processing unit), shown in the right column, includes a “Simulation” service.)
wherein, the processing unit is configured to generate a plurality of industrial process behavioral data, wherein industrial process behavioral data is generated for at least some of the plurality of input value trajectories, and wherein the generation of the industrial process behavioral data for the at least some of the plurality of input value trajectories comprises utilization of the simulator; (Pg. 5, section 2: “The DT framework described above may support the generation and labelling of virtually created datasets, via means of dynamic simulations over high fidelity and realistic DT models, for training ML models. Such virtually created training datasets are generated by a chain of Simulation and Dataset Generation and Labelling services acting upon the DT Model and Data. The Simulation service provides as an output a high-fidelity behaviour of a system (for example, a detailed and photorealistic animation of an industrial robotic cell environment).” The simulation service generates system behavior data (i.e. industrial process behavioral data) based on the DT data (i.e. input value trajectories).)
wherein, the processing unit is configured to implement a machine learning algorithm that models the industrial process; (Pg. 4, figure 1 shows “[u]tilization of the digital twin for the development of ML-based applications for smart manufacturing.” The digital twin’s “Information Layer” and “Services Layer” (i.e. processing unit) include an “ML model” and “Training” and “ML Model Provider” services (i.e. implementation of the ML algorithm).)
wherein, the processing unit is configured to train the machine learning algorithm; (Pg. 5, section 2: “The labelled training datasets can be used by the Training service for training the ML model. The Training service is configured by ML experts. The result of the Training service is a properly trained ML model, for example, an ANN.”)
Alexopoulos does not appear to explicitly disclose the remaining claim 1 limitations.
However, Burger teaches wherein, the processing unit is configured to process a first behavioral data of the plurality of behavioral data with the machine learning algorithm to determine a first modelled result; (0002: “Technology related to incremental training of machine learning tools is disclosed… The machine learning tool can be a deep neural network. Input data can be applied to the machine learning tool to generate an output of the machine learning tool.” Machine learning input data (i.e. first behavioral data) is fed into the machine learning algorithm to determine a machine learning output (i.e. a first modelled result).)
wherein, the processing unit is configured to determine to train or not to train the machine learning algorithm using the first behavioral data, the determination comprising a comparison of the first modelled result with a performance condition; (0002: “A measure of prediction quality can be generated for the output of the machine learning tool. In response to determining the measure of prediction quality is below a threshold, incremental training of the operational parameters can be initiated using the input data as training data for the machine learning tool.” The quality of the model’s prediction (i.e. the first modelled result) is compared to a quality threshold (i.e. a performance condition) to determine whether to train the model using the input data (i.e. first behavioral data) as training data.)
wherein, the processing unit is configured to process a second behavioral data of the plurality of behavioral data with the machine learning algorithm to determine a second modelled result; and (Examiner notes that this limitation is identical to the limitation above except that it operates on a second input to determine a second model output. The training method disclosed by Burger is iterative, and thus includes processing multiple inputs to generate multiple outputs. See, e.g., 0023: “As described herein, the accuracy of the DNN model can potentially be improved by selectively using input data collected by the edge devices to incrementally train the DNN model.”)
wherein, the processing unit is configured to determine to train or not to train the machine learning algorithm using the second behavioral data or to further train or not to further train the machine learning algorithm using the second behavioral data, the determination comprising a comparison of the second modelled result with the performance condition. (Examiner notes that this limitation is identical to the limitation above except that it analyzes the second model output to determine whether to train using the second input. The training method disclosed by Burger is iterative, and thus includes analyzing multiple outputs to make multiple training determinations. See, e.g., 0023: “As described herein, the accuracy of the DNN model can potentially be improved by selectively using input data collected by the edge devices to incrementally train the DNN model.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Alexopoulos and Burger. Alexopoulos teaches using a digital twin-based simulator to generate machine learning model training data for industrial manufacturing applications. Burger teaches incremental training of a machine learning model, where the model is only trained on new data when its performance on the new data is unsatisfactory. One of ordinary skill would have motivation to combine Alexopoulos and Burger because “[t]his approach can potentially: improve an overall accuracy of the deployed DNN model; reduce a communication workload between edge devices and the server computer; reduce a cost of data labeling by reducing or minimizing redundancy and/or repetition in the training data; reduce a DNN retraining cost by only processing more informative samples; and recycle unlabeled data collected on the edge devices by looking for informative samples” (Burger, 0024).
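The incremental-training behavior attributed to Burger (paragraph 0002) can be sketched as follows. This is an illustrative reading, not Burger's implementation: the model, the quality function, and the threshold are placeholder stand-ins.

```python
def incremental_training_step(model, quality_of, inputs, threshold):
    """Sketch of selective incremental training: apply each input to the
    model, score the prediction quality, and collect as training data only
    those inputs whose prediction quality falls below the threshold."""
    trained_on = []
    for x in inputs:
        output = model(x)
        if quality_of(output) < threshold:   # unsatisfactory prediction
            trained_on.append(x)             # use this input as training data
    return trained_on

# Toy stand-ins: the "model" echoes its input, and predictions for large
# values are scored as low quality, so only those are selected.
picked = incremental_training_step(
    model=lambda x: x,
    quality_of=lambda y: 1.0 if y < 5 else 0.2,
    inputs=[1, 7, 3, 9],
    threshold=0.5,
)
assert picked == [7, 9]
```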
Regarding Claim 2, Alexopoulos and Burger teach The system according to claim 1, as shown above.
Alexopoulos also teaches wherein the plurality of input value trajectories comprises one or more of: process data; temperature data; pressure data; flow data; level data; voltage data; current data; power data; actuator data; valve data; sensor data; and controller data. (Pg. 5, section 2: “[A] highly realistic DT is built and maintained by combining data captured through IoT technology (e.g. cameras, sensors), together with engineering data (e.g. kinematics, 3D geometry) from digital factory tools.” The input data (i.e. input value trajectories) received at the communication layer (i.e. input unit) includes sensor data.)
Regarding Claim 3, Alexopoulos and Burger teach The system according to claim 1, as shown above.
Burger also teaches wherein the determination to train or not to train the machine learning algorithm using the first behavioral data comprises a determination of a sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data. (0002: “A measure of prediction quality can be generated for the output of the machine learning tool. In response to determining the measure of prediction quality is below a threshold, incremental training of the operational parameters can be initiated using the input data as training data for the machine learning tool.” The determination to train or not to train using input data (i.e. first behavioral data) is based on a measure of model prediction quality (i.e. sensitivity) on the input data (i.e. a portion of behavioral data).)
Regarding Claim 4, Alexopoulos and Burger teach The system according to claim 1, as shown above.
Burger also teaches wherein the determination to train or not to train the machine learning algorithm using the second behavioral data, comprises a determination of the sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data. (See the portion of 0002 cited above in regard to claim 3. The determination to train or not to train using input data (i.e. second behavioral data) is based on a measure of model prediction quality (i.e. sensitivity) on the input data (i.e. a portion of behavioral data).)
Regarding Claim 5, Alexopoulos and Burger teach The system according to claim 1, as shown above.
Burger also teaches wherein the determination to further train or not to further train the machine learning algorithm using the second behavioral data, comprises a determination of the sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data. (See the portion of 0002 cited above in regard to claim 3. The determination to further train or not to further train using input data (i.e. second behavioral data) is based on a measure of model prediction quality (i.e. sensitivity) on the input data (i.e. a portion of behavioral data).)
Regarding Claim 6, Alexopoulos and Burger teach The system according to claim 4, as shown above.
Burger also teaches wherein the determination of the sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data comprises an analysis of a loss function of the trained machine learning algorithm with respect to at least the portion of the plurality of behavioral data. (0037-0039: “The quality analyzer 180 can also determine a quality of the results from the machine learning tool 170 in an unsupervised manner using mathematical and/or statistical properties of outputs of the machine learning tool 170. For example, the machine learning tool 170 can include a deep neural network and a perplexity of the outputs of the last layer can be used to determine the quality of the results. Perplexity can be calculated in various ways, but perplexity is a measure of a variability of a prediction model and/or a measure of prediction error… In some examples, a log of the perplexity value can be used for simplification. At a given training epoch, a low log-perplexity value implies that the sample is a typical sample and that the neural network model is not “surprised” with the particular sample. In other words, the sample has a relatively low loss value.” The determination of model result quality (i.e. sensitivity) can be based on an analysis of the perplexity value or loss value of the input data (i.e. portion of behavioral data).)
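The log-perplexity quality measure attributed to Burger (paragraphs 0037-0039) can be illustrated with a short sketch. The specific formulation below (average negative log-probability over output probabilities) is an assumption for illustration; Burger notes perplexity can be calculated in various ways.

```python
import math

def log_perplexity(probs):
    """Average negative log-probability: low values indicate a typical
    sample the model is not 'surprised' by (i.e. low loss)."""
    return -sum(math.log(p) for p in probs) / len(probs)

# A confidently predicted (typical) sample yields a lower log-perplexity
# than an atypical one the model is "surprised" by.
confident = log_perplexity([0.9, 0.95, 0.85])
surprised = log_perplexity([0.2, 0.1, 0.3])
assert confident < surprised
```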
Regarding Claim 7, Alexopoulos and Burger teach The system according to claim 1, as shown above.
Burger also teaches wherein the processing unit is configured to determine to stop training of the machine learning algorithm, the determination comprising a determination of a sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data. (0002: “A measure of prediction quality can be generated for the output of the machine learning tool. In response to determining the measure of prediction quality is below a threshold, incremental training of the operational parameters can be initiated using the input data as training data for the machine learning tool.” The determination not to train (i.e. to stop training) is based on a measure of model prediction quality (i.e. sensitivity) on the input data (i.e. a portion of behavioral data).)
Regarding Claim 8, Alexopoulos and Burger teach The system according to claim 1, as shown above.
Alexopoulos also teaches wherein the processing unit is configured to select the at least some of the plurality of input value trajectories. (Pg. 4-5, section 2: “DT Models and Data are the cores of the DT framework… the virtual model is timely combined with status datasets generated in the real environment (e.g. IoT data, image data, machine data and more) and can become available to the DT [digital twin] model through the underlying communication layer consisting of several protocols, such as HTTP and MQTT… a highly realistic DT is built and maintained by combining data captured through IoT technology (e.g. cameras, sensors), together with engineering data (e.g. kinematics, 3D geometry) from digital factory tools.” The information layer (i.e. processing unit) includes “Digital Twin…Data” (i.e. input value trajectories), which is obtained (i.e. selected) from the communication layer.)
Claim 16 is a method claim containing substantially the same elements as system claim 1. Alexopoulos and Burger teach the elements of claim 1, as shown above; accordingly, claim 16 is rejected on the same grounds under 35 U.S.C. 103 as claim 1, mutatis mutandis.
Claims 9-15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Alexopoulos in view of Burger, and further in view of
Ghanta et al. (hereinafter Ghanta), U.S. Patent Application Publication US-20200034665-A1 (filed 07/30/2018).
Regarding Claim 9,
Alexopoulos teaches An industrial process model selection and generation system, comprising:
an input unit; and (Pg. 4, figure 1 shows “[u]tilization of the digital twin for the development of ML-based applications for smart manufacturing.” The digital twin’s “Communication Layer” (i.e. input unit) is shown in the right column.)
a processing unit; (Pg. 4, figure 1 shows “[u]tilization of the digital twin for the development of ML-based applications for smart manufacturing.” The digital twin’s “Information Layer” and “Services Layer” (i.e. processing unit) are shown in the right column.)
wherein, the input unit is configured to receive a plurality of input value trajectories comprising operational input value trajectories and simulation input value trajectories; (Pg. 4-5, section 2: “[T]he virtual model is timely combined with status datasets generated in the real environment (e.g. IoT data, image data, machine data and more) and can become available to the DT [digital twin] model through the underlying communication layer consisting of several protocols, such as HTTP and MQTT… a highly realistic DT is built and maintained by combining data captured through IoT technology (e.g. cameras, sensors), together with engineering data (e.g. kinematics, 3D geometry) from digital factory tools.” The communication layer (i.e. input unit) receives data captured through IoT technology (i.e. simulation input value trajectories) and data from digital factory tools (i.e. operational input value trajectories).)
wherein, the processing unit is configured to implement a simulator of the industrial process; (Pg. 4, figure 1 shows “[u]tilization of the digital twin for the development of ML-based applications for smart manufacturing.” The digital twin’s “Services Layer” (i.e. processing unit), shown in the right column, includes a “Simulation” service.)
wherein, the processing unit is configured to generate a plurality of industrial process behavioral data, wherein the industrial process behavioral data is generated for at least some of the plurality of input value trajectories, and wherein the generation of the industrial process behavioral data for the at least some of the plurality of input value trajectories comprises utilization of the simulator; (Pg. 5, section 2: “The DT framework described above may support the generation and labelling of virtually created datasets, via means of dynamic simulations over high fidelity and realistic DT models, for training ML models. Such virtually created training datasets are generated by a chain of Simulation and Dataset Generation and Labelling services acting upon the DT Model and Data. The Simulation service provides as an output a high-fidelity behaviour of a system (for example, a detailed and photorealistic animation of an industrial robotic cell environment).” The simulation service generates system behavior data (i.e. industrial process behavioral data) based on the DT data (i.e. input value trajectories).)
Alexopoulos does not appear to explicitly disclose the remaining claim 9 limitations.
However, Burger teaches wherein, the processing unit is configured to process a first behavioral data of the plurality of behavioral data with a first machine learning algorithm of the plurality of machine learning algorithms to determine a first machine learning algorithm first modelled result; and (0002: “Technology related to incremental training of machine learning tools is disclosed… The machine learning tool can be a deep neural network. Input data can be applied to the machine learning tool to generate an output of the machine learning tool.” Machine learning input data (i.e. first behavioral data) is fed into the machine learning tool (i.e. first machine learning algorithm) to determine a machine learning output (i.e. a first modelled result).)
wherein, the processing unit is configured to determine to train the first machine learning algorithm using the first behavioral data [or implement a second machine learning algorithm of the plurality of machine learning algorithms], and wherein the determination comprises a comparison of the first machine learning algorithm first modelled result with a performance condition. (0002: “A measure of prediction quality can be generated for the output of the machine learning tool. In response to determining the measure of prediction quality is below a threshold, incremental training of the operational parameters can be initiated using the input data as training data for the machine learning tool.” The quality of the model’s prediction (i.e. the first modelled result) is compared to a quality threshold (i.e. a performance condition) to determine whether to train the model using the input data (i.e. first behavioral data) as training data.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Alexopoulos and Burger. Alexopoulos teaches using a digital twin-based simulator to generate machine learning model training data for industrial manufacturing applications. Burger teaches incremental training of a machine learning model, where the model is trained on new data only when its performance on the new data is unsatisfactory. One of ordinary skill in the art would have been motivated to combine Alexopoulos and Burger because “[t]his approach can potentially: improve an overall accuracy of the deployed DNN model; reduce a communication workload between edge devices and the server computer; reduce a cost of data labeling by reducing or minimizing redundancy and/or repetition in the training data; reduce a DNN retraining cost by only processing more informative samples; and recycle unlabeled data collected on the edge devices by looking for informative samples” (Burger, 0024).
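For illustration only, the threshold-gated retraining decision Burger describes can be sketched as follows; the function name, quality scores, and threshold value are hypothetical and are not drawn from the reference.

```python
# Illustrative sketch of Burger's threshold-gated incremental training
# decision; quality values and the threshold are hypothetical.

def should_retrain(prediction_quality, threshold=0.8):
    """Return True when the measure of prediction quality for a model
    output falls below the performance threshold, signalling that the
    input should be used as incremental training data."""
    return prediction_quality < threshold

# Simulated per-sample quality measures for a stream of inputs.
qualities = [0.95, 0.60, 0.85, 0.40]
to_retrain = [q for q in qualities if should_retrain(q)]
```

Only the low-quality samples (0.60 and 0.40 in this sketch) would be routed back as training data, mirroring Burger's rule that incremental training is initiated only when prediction quality is below the threshold.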
Alexopoulos and Burger do not appear to explicitly disclose wherein, the processing unit is configured to implement a plurality of machine learning algorithm that model the industrial process;
Determining to implement a second machine learning algorithm of the plurality of machine learning algorithms
However, Ghanta teaches wherein, the processing unit is configured to implement a plurality of machine learning algorithm that model the industrial process; (0068-0069: “[T]he logical machine learning layer 225 of FIG. 2B includes a plurality of training pipelines 204 a-b, executing on training devices 205 a-b… In the depicted embodiment, the training pipelines 204 a-b generate machine learning models for an objective, based on training data for the objective.” Multiple machine learning models are trained (i.e. implemented) for an objective (i.e. to model the industrial process).)
wherein, the processing unit is configured to determine to […] implement a second machine learning algorithm of the plurality of machine learning algorithms, and wherein the determination comprises a comparison of the first machine learning algorithm first modelled result with a performance condition. (0037-0038: “In one embodiment, the ML management apparatus 104 provides an improvement for machine learning systems by training a first or primary machine learning model for a first/primary machine learning algorithm using a training data set, validating the first machine learning model using a validation data set, the output of which is an error data set that describes the accuracy of the first machine learning model on the validation data set, and training a second machine learning model for a second/auxiliary machine learning algorithm using the error data set. The second machine learning algorithm is then used to predict, verify, validate, check, monitor, and/or the like the efficacy, accuracy, reliability, and/or the like of the first or primary machine learning model that is used to analyze an inference data set… if the health/suitability score satisfies an unsuitability threshold, indicating that the first machine learning model used to analyze the inference data set is not suitable for the inference training data, the ML management apparatus 104 may change the machine learning model…” A model suitability score based on model performance on validation data (i.e. first modelled result) is compared to an unsuitability threshold (i.e. a performance condition) to determine whether to change the machine learning model (i.e. implement a second machine learning algorithm).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Alexopoulos, Burger, and Ghanta. Alexopoulos teaches using a digital twin-based simulator to generate machine learning model training data for industrial manufacturing applications. Burger teaches incremental training of a machine learning model, where the model is trained on new data only when its performance on the new data is unsatisfactory. Ghanta teaches determining the suitability of a machine learning model for data, where, if the model is found to be unsuitable, corrective action such as retraining or switching the model is taken. One of ordinary skill in the art would have been motivated to combine Alexopoulos, Burger, and Ghanta in order to “determine which of the machine learning models is the best fit for the objective that is being analyzed” (Ghanta, 0071).
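The combined decision logic (retrain the first algorithm versus switch to a second algorithm, as taught by Burger and Ghanta respectively) can be illustrated with the following sketch; the error metric, action names, and threshold values are hypothetical assumptions made for illustration, not taken from either reference.

```python
# Hypothetical sketch combining Burger's retrain-on-poor-quality rule
# with Ghanta's switch-model-on-unsuitability rule; all thresholds and
# labels are illustrative.

def decide_action(modelled_result_error,
                  retrain_threshold=0.2,
                  switch_threshold=0.5):
    """Compare a model's result error against performance conditions:
    small error -> keep the model, moderate error -> retrain it,
    large error -> implement a second algorithm instead."""
    if modelled_result_error >= switch_threshold:
        return "implement_second_algorithm"
    if modelled_result_error >= retrain_threshold:
        return "train_first_algorithm"
    return "keep"
```

Under this sketch, a first modelled result with low error leaves the first algorithm in place, a moderately poor result triggers incremental training on the behavioral data, and a sufficiently poor result triggers implementation of a second algorithm from the plurality.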
Regarding Claim 10, Alexopoulos, Burger, and Ghanta teach The system according to claim 9, as shown above.
Burger also teaches wherein the processing unit is configured to process a second behavioral data of the plurality of behavioral data with the first machine learning algorithm to determine a first machine learning algorithm second modelled result; and wherein the processing unit is configured to determine to train the first machine learning algorithm using the second behavioral data or implement the second machine learning algorithm, wherein the determination comprising a comparison of the first machine learning algorithm second modelled result with the performance condition. (Examiner notes that this claim is identical to the final two limitations of claim 9 except that it operates on a second input to determine a second model output, and determines whether to train using the second input. The training method disclosed by Burger is iterative, and thus includes processing multiple inputs to generate multiple outputs and make multiple training determinations. See, e.g., 0023: “As described herein, the accuracy of the DNN model can potentially be improved by selectively using input data collected by the edge devices to incrementally train the DNN model.”)
Regarding Claim 11, Alexopoulos, Burger, and Ghanta teach The system according to claim 9, as shown above.
Burger also teaches wherein the processing unit is configured to process the first behavioral data with the second machine learning algorithm to determine a second machine learning algorithm first modelled result; and wherein the processing unit is configured to determine to train the second machine learning algorithm using the first behavioral data or implement a third machine learning algorithm of the plurality of machine learning algorithms, wherein the determination comprises a comparison of the second machine learning algorithm first modelled result with the performance condition. (Examiner notes that this claim is identical to the final two limitations of claim 9 except that it processes input using the second model and determines whether to train the second model or implement a third model (i.e. a repetition of the same process once the second model has been implemented). The training method disclosed by Burger is iterative (see the portion of 0023 cited above in regard to claim 10), and thus, in combination with Ghanta’s model switching, includes processing input with multiple models to make multiple training determinations.)
Regarding Claim 12, Alexopoulos, Burger, and Ghanta teach The system according to claim 9, as shown above.
Burger also teaches wherein the determination to train the existing machine learning algorithm using behavioral data or implement a new machine learning algorithm comprises a determination of a sensitivity of the existing machine learning algorithm to at least a portion of the plurality of behavioral data. (0002: “A measure of prediction quality can be generated for the output of the machine learning tool. In response to determining the measure of prediction quality is below a threshold, incremental training of the operational parameters can be initiated using the input data as training data for the machine learning tool.” The determination to train using input data (i.e. first behavioral data) is based on a measure of model prediction quality (i.e. sensitivity) on the input data (i.e. a portion of behavioral data).)
Regarding Claim 13, Alexopoulos, Burger, and Ghanta teach The system according to claim 9, as shown above.
Burger also teaches wherein the processing unit is configured to determine to stop training of the existing machine learning algorithm, wherein the determination comprises a determination of a sensitivity of the trained machine learning algorithm to at least a portion of the plurality of behavioral data. (0002: “A measure of prediction quality can be generated for the output of the machine learning tool. In response to determining the measure of prediction quality is below a threshold, incremental training of the operational parameters can be initiated using the input data as training data for the machine learning tool.” The determination not to train (i.e. to stop training) is based on a measure of model prediction quality (i.e. sensitivity) on the input data (i.e. a portion of behavioral data).)
Regarding Claim 14, Alexopoulos, Burger, and Ghanta teach The system according to claim 12, as shown above.
Burger also teaches wherein the determination of the sensitivity of the trained machine learning algorithm to at least the portion of the plurality of behavioral data comprises an analysis of a loss function of the trained machine learning algorithm with respect to at least the portion of the plurality of behavioral data. (0037-0039: “The quality analyzer 180 can also determine a quality of the results from the machine learning tool 170 in an unsupervised manner using mathematical and/or statistical properties of outputs of the machine learning tool 170. For example, the machine learning tool 170 can include a deep neural network and a perplexity of the outputs of the last layer can be used to determine the quality of the results. Perplexity can be calculated in various ways, but perplexity is a measure of a variability of a prediction model and/or a measure of prediction error… In some examples, a log of the perplexity value can be used for simplification. At a given training epoch, a low log-perplexity value implies that the sample is a typical sample and that the neural network model is not “surprised” with the particular sample. In other words, the sample has a relatively low loss value.” The determination of model result quality (i.e. sensitivity) can be based on an analysis of the perplexity value or loss value of the input data (i.e. portion of behavioral data).)
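The log-perplexity measure Burger describes at 0037-0039 can be sketched for the single-sample classification case, where the loss is the negative log-likelihood of the true class; the function names and the loss threshold are illustrative assumptions, not drawn from the reference.

```python
import math

# Illustrative sketch of a log-perplexity (negative log-likelihood)
# quality measure; the threshold value is hypothetical.

def log_perplexity(true_class_prob):
    """Negative log-likelihood of the true class. A low value means the
    model is not 'surprised' by the sample (a typical, low-loss sample);
    a high value indicates a high-loss, atypical sample."""
    return -math.log(true_class_prob)

def is_informative(true_class_prob, loss_threshold=1.0):
    """Flag high-loss (surprising) samples as candidates for training."""
    return log_perplexity(true_class_prob) > loss_threshold
```

In this sketch, a sample the model predicts confidently and correctly (true-class probability near 1) has log-perplexity near zero and is not flagged, while a sample the model assigns low probability to the true class has a high loss value and is flagged, consistent with Burger's use of loss/perplexity to gauge sensitivity to a portion of the behavioral data.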
Regarding Claim 15, Alexopoulos, Burger, and Ghanta teach The system according to claim 9, as shown above.
Alexopoulos also teaches wherein the processing unit is configured to select the at least some of the plurality of input value trajectories. (Pg. 4-5, section 2: “DT Models and Data are the cores of the DT framework… the virtual model is timely combined with status datasets generated in the real environment (e.g. IoT data, image data, machine data and more) and can become available to the DT [digital twin] model through the underlying communication layer consisting of several protocols, such as HTTP and MQTT… a highly realistic DT is built and maintained by combining data captured through IoT technology (e.g. cameras, sensors), together with engineering data (e.g. kinematics, 3D geometry) from digital factory tools.” The information layer (i.e. processing unit) includes “Digital Twin…Data” (i.e. input value trajectories), which is obtained (i.e. selected) from the communication layer.)
Claim 17 is a method claim containing substantially the same elements as system claim 9. Alexopoulos, Burger, and Ghanta teach the elements of claim 9, as shown above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN M ROHD whose telephone number is (571)272-6445. The examiner can normally be reached Mon-Thurs 8:00-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.M.R./Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147