DETAILED ACTION
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 30 January 2026 [hereinafter Response] has been entered, where:
Claims 1, 9, and 17 have been amended.
Claims 1-20 are pending.
Claims 1-20 are rejected.
Claim Rejections – 35 U.S.C. § 112
3. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
4. Claims 2, 14, 15, and 20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 2, line 12, recites “[(f)] providing feedback data to the first machine learning model, indicative of how accurately the second third machine learning model distinguished synthetic from measured sensor time series data;” the term “the second third machine learning model” is indefinite because it is unclear whether the claim is directed to a second machine learning model, a third machine learning model, or a combination thereof.
Claim 14, line 3, recites training “a second machine learning model,” which is indefinite because claim 9, lines 17-18, also recites training “a second machine learning model,” and accordingly, it is unclear whether the term is intended to draw antecedence from the earlier occurrence of the term or be considered an additional form of model and/or training thereof.
Claim 15 depends from claim 14, and is rejected as depending from a rejected claim; further, claim 15 fails to cure the deficiencies of claim 14.
Claim 20, line 3, recites training “a second machine learning model,” which is indefinite because claim 17, lines 17-18, also recites training “a second machine learning model,” and accordingly, it is unclear whether the term is intended to draw antecedence from the earlier occurrence of the term or be considered an additional form of model and/or training thereof.
Claim Rejections - 35 U.S.C. § 101
5. 35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
6. Claims 2, 6, 7, 10, 14, 15, 18, and 20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 2 depends from claim 1, which recites a “method,” which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitation of “[(e.3.1)] the third machine learning model is configured to distinguish between synthetic sensor time series data and measured sensor time series data.” The limitation of “distinguish” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), that is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 2 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “first,” “second,” and “third” machine learning model, which are recited at a high level of generality, and accordingly, are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not integrate the abstract idea into a practical application. The claim also recites more details or specifics of the “third machine learning model,” where “the third machine learning model comprising a discriminator,” and accordingly, is merely more specific to the generic computer component.
The claim also recites, via claim 1, the limitation of “[(a)] the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not integrate the abstract idea into a practical application.
The claim also recites, via claim 1, limitations of “[(a)] providing a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model based on the first data and the second data,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving,” and “[(d)] utilizing” are pre- and post-processing, insignificant extra-solution activities of mere data gathering and using the data to train a second machine learning model, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application.
Claim 1 also recites more details or specifics to the additional element of “[(b)] providing second data,” where “[(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber, or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element. Claim 1 also recites more specifics or details to the additional element of “[(c)] receiving an output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] wherein the output is generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element.
Dependent claim 2 also recites a “third machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. Claim 2 also recites the limitations of “[(e)] training the first machine learning model,” “[(f)] providing feedback data to the first machine learning model, indicative of how accurately the second third machine learning model distinguished synthetic from measured sensor time series data,” and “[(g)] updating the first machine learning model to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data,” in which “[(e)] training,” “[(f)] providing the feedback data,” and “[(g)] updating” are the use of generic computer components (first machine learning model, second machine learning model, third machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. Therefore, claim 2 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The claim recites a “first,” a “second,” and a “third” machine learning model, which are recited at a high level of generality, and accordingly, are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea. The claim also recites more details or specifics of the “third machine learning model,” where “the third machine learning model comprising a discriminator,” and accordingly, is merely more specific to the generic computer component. The claim also recites “utilizing the synthetic sensor time series data to train a second machine learning model in connection with the processing chamber,” which is merely the use of a generic computer component (second machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.
The claim also recites, in claim 1, the limitation of “[(a) providing the first data] . . . to a first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea.
Claim 1 also recites limitations of “[(a)] providing a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving an output,” and “[(d)] utilizing” are well-understood, routine, and conventional activities of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that do not amount to significantly more than the abstract idea.
The claim provides more details or specifics to the additional element of “[(b)] providing second data,” where “[(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element. Claim 1 also recites more specifics or details to the additional element of “[(c)] receiving an output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] wherein the output is generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element.
Claim 2 also recites a “third machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Claim 2 also recites the limitations of “[(e)] training the first machine learning model,” “[(f)] providing feedback data to the first machine learning model, indicative of how accurately the second third machine learning model distinguished synthetic from measured sensor time series data,” and “[(g)] updating the first machine learning model to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data,” in which “[(e)] training,” “[(f)] providing the feedback data,” and “[(g)] updating” are the use of generic computer components (first machine learning model, second machine learning model, third machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Therefore, claim 2 is subject-matter ineligible.
Claim 6 depends from claim 1, which recites a “method,” which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitation of “[(e.2.1)] wherein the second machine learning model is configured to predict attributes of the processing chamber based on measured sensor time series data of the processing chamber.” The limitation of “[(e.2.1)] predict” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), that is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 6 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include a “first machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application.
The claim also recites, via claim 1, the limitation of “[(a)] providing first data . . . to a first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not integrate the abstract idea into a practical application.
Independent claim 1 also recites limitations of “[(a)] providing a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model based on the first data and the second data,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving,” and “[(d)] utilizing” are pre- and post-processing, insignificant extra-solution activities of mere data gathering and using the data to train a second machine learning model, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application. Claim 1 also recites more details or specifics to the additional element of “[(b)] providing second data,” where “[(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber, or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element. Claim 1 also recites more specifics or details to the additional element of “[(c)] receiving an output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] wherein the output is generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element.
Dependent claim 6 also recites the limitation of “[(e)] training a second machine learning model,” which is the use of a generic computer component (second machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. The claim recites that the training further comprises: “[(e.1)] providing the output synthetic sensor time series data to the second machine learning model as training input,” and “[(e.2)] providing first data indicative of one or more attributes associated with the output synthetic sensor time series data to the second machine learning model as target output,” in which “[(e.1), (e.2)] providing” is the insignificant extra-solution activity of mere data gathering, (MPEP § 2106.05(g)), which does not integrate the abstract idea into a practical application. Therefore, claim 6 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The claim recites a “first machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.
The claim also recites, via claim 1, the limitation of “[(a)] providing first data . . . to a first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea.
The claim also recites, via claim 1, limitations of “[(a)] providing a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model based on the first data and the second data,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving,” and “[(d)] utilizing” are well-understood, routine, and conventional activities of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that do not amount to significantly more than the abstract idea.
The claim provides more details or specifics to the additional element of “[(b)] providing second data,” where “[(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element. Claim 1 also recites more specifics or details to the additional element of “[(c)] receiving an output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] wherein the output is generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element.
Claim 6 also recites a “second machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Claim 6 also recites the limitation of “[(e)] training a second machine learning model,” which is the use of a generic computer component (second machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. The claim recites that the training further comprises: “[(e.1)] providing the output synthetic sensor time series data to the second machine learning model as training input,” and “[(e.2)] providing first data indicative of one or more attributes associated with the output synthetic sensor time series data to the second machine learning model as target output,” in which “[(e.1), (e.2)] providing” is the well-understood, routine, and conventional activity of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that does not amount to significantly more than the abstract idea. Therefore, claim 6 is subject-matter ineligible.
Claim 7 depends from claim 6. The claim recites “[(e.2.2)] wherein the second machine learning model is configured to detect one or more anomalies associated with measured sensor time series data of the processing chamber,” in which the activity of “[(e.2.2)] to detect” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), that is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The additional elements of the claim do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus, the claim recites no more than the abstract idea. Therefore, claim 7 is subject-matter ineligible.
Claim 10 depends from claim 9, which recites a “system,” which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitation of “[(e.2.1)] the second machine learning model is configured to distinguish between synthetic sensor time series data and measured sensor time series data.” The limitation of “[(e.2.1)] distinguish” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), that is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 10 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in claim 9 beyond the identified judicial exception include “memory and a processing device coupled to the memory,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not serve to integrate the abstract idea into a practical application. Claim 9 also recites a “first,” “second,” and “third” machine learning model, which are recited at a high level of generality, and accordingly, are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not integrate the abstract idea into a practical application.
The claim also recites, via claim 9, the limitation of “[(a)] the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not integrate the abstract idea into a practical application.
The claim also recites, via claim 9, limitations of “[(a)] provide first data comprising a random or pseudo-random input,” “[(b)] provide second data . . . to the first trained machine learning model,” “[(c)] receive an output from the first trained machine learning model,” and “[(d)] utilize the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] provide,” “[(c)] receive,” and “[(d)] utilize” are insignificant extra-solution activities of mere data gathering, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application.
Independent claim 9 also recites more specifics or details to the additional element of “[(c)] receive the output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] wherein the output is generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element.
The claim provides more details or specifics to the additional element of “[(b)] provide second data,” where “[(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element.
Claim 10 also recites a “third machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. Claim 10 also recites the limitations of “[(e)] train the first machine learning model,” “[(e.1)] causing the first machine learning model to generate synthetic sensor time series data,” “[(e.2)] providing the synthetic sensor time series data to a third machine learning model,” “[(e.3)] providing measured sensor time series data to the third machine learning model,” “[(e.4)] providing feedback data to the first machine learning model indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data,” and “[(e.5)] updating the first machine learning model to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data,” in which “[(e)] training,” “[(e.1)] causing,” “[(e.2), (e.3), (e.4)] providing,” and “[(e.5)] updating” are the use of generic computer components (memory, processing device, first machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. Therefore, claim 10 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The claim recites a “memory and a processing device coupled to the memory,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea. Claim 9 also recites a “first machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.
The claim also recites, via claim 9, the limitation of “[(a)] . . . the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea.
The claim also recites, in claim 9, limitations of “[(a)] provide the first data comprising providing a random or pseudo-random input,” “[(b)] provide second data . . . to the first trained machine learning model,” “[(c)] receive an output from the first trained machine learning model,” and “[(d)] utilize the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] provide,” “[(c)] receive,” and “[(d)] utilize” are well-understood, routine, and conventional activities of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that do not amount to significantly more than the abstract idea.
The claim also provides, via claim 9, more details or specifics to the additional element of “[(b)] providing second data,” where “[(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element.
Claim 9 also recites more specifics or details to the additional element of “[(c)] receive the output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] wherein the output is generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element. Claim 10 recites a “third machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Claim 10 also recites the limitations of “[(e)] train the first machine learning model,” “[(e.1)] causing the first machine learning model to generate synthetic sensor time series data,” “[(e.2)] providing the synthetic sensor time series data to a third machine learning model,” “[(e.3)] providing measured sensor time series data to the third machine learning model,” “[(e.4)] providing feedback data to the first machine learning model indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data,” and “[(e.5)] updating the first machine learning model to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data,” in which “[(e)] training,” “[(e.1)] causing,” “[(e.2), (e.3), (e.4)] providing,” and “[(e.5)] updating” are the use of generic computer components (memory, processing device, first machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Therefore, claim 10 is subject-matter ineligible.
Claim 14 depends from claim 9, which recites a “system,” which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitation of “[(e.3.1)] wherein the second machine learning model is configured to predict attributes of the processing chamber based on measured sensor time series data of the processing chamber.” The limitation of “predict” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), that is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 14 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in claim 9 beyond the identified judicial exception include “memory and a processing device coupled to the memory,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not serve to integrate the abstract idea into a practical application. Claim 9 also recites a “first” and “second” machine learning model, which are recited at a high level of generality, and accordingly, are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not integrate the abstract idea into a practical application.
The claim also recites, via claim 9, the limitation of “[(a)] the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not integrate the abstract idea into a practical application.
The claim also recites, via claim 9, limitations of “[(a)] provide first data comprising a random or pseudo-random input,” “[(b)] provide second data . . . to the first trained machine learning model,” “[(c)] receive an output from the first trained machine learning model,” and “[(d)] utilize the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] provide,” “[(c)] receive,” and “[(d)] utilize” are insignificant extra-solution activities of mere data gathering, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application.
Independent claim 9 also recites more specifics or details to the additional element of “[(c)] receive the output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] wherein the output is generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element.
The claim provides more details or specifics to the additional element of “[(b)] provide second data,” where “[(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element.
Claim 14 also recites a “second machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. Claim 14 also recites the limitation of “[(e)] train a second machine learning model,” which is the use of a generic computer component (second machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. The claim recites that the training further comprises: “[(e.1)] providing the output synthetic sensor time series data to the second machine learning model as training input,” and “[(e.2)] providing first data indicative of one or more attributes associated with the output synthetic sensor time series data to the second machine learning model as target output,” in which “[(e.1), (e.2)] providing” is the insignificant extra-solution activity of mere data gathering, (MPEP § 2106.05(g)), which does not integrate the abstract idea into a practical application. Therefore, claim 14 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The claim recites “memory and a processing device coupled to the memory,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea. Claim 9 also recites a “first” and a “second” “machine learning model,” which are recited at a high level of generality, and accordingly, are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea.
The claim also recites, via claim 9, the limitation of “[(a)] the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea.
The claim also recites, in claim 9, limitations of “[(a)] provide first data comprising a random or pseudo-random input,” “[(b)] provide second data . . . to the first trained machine learning model,” “[(c)] receive an output from the first trained machine learning model,” and “[(d)] utilize the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] provide,” “[(c)] receive,” and “[(d)] utilize” are insignificant extra-solution activities of mere data gathering, (MPEP § 2106.05(g)), that do not amount to significantly more than the abstract idea.
Claim 14 also recites a “second machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Claim 14 also recites the limitation of “[(e)] train a second machine learning model,” which is the use of a generic computer component (second machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. The claim recites that the training further comprises: “[(e.1)] providing the output synthetic sensor time series data to the second machine learning model as training input,” and “[(e.2)] providing first data indicative of one or more attributes associated with the output synthetic sensor time series data to the second machine learning model as target output,” in which “[(e.1), (e.2)] providing” is the well-understood, routine, and conventional activity of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that does not amount to significantly more than the abstract idea. Therefore, claim 14 is subject-matter ineligible.
Claim 15 depends from claim 14. The claim recites the limitation of “wherein the second machine learning model is configured to detect one or more anomalies associated with measured sensor time series data of the processing chamber,” which is a mental process, (MPEP § 2106.04(a)(2) sub III), and is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The additional elements of the claim do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus, the claim recites no more than the abstract idea. Therefore, claim 15 is subject-matter ineligible.
Claim 18 depends from claim 17, which recites a “non-transitory machine-readable storage medium,” which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitation of “[(e.3)] providing measured sensor time series data to the third machine learning model, [(e.3.1)] the second machine learning model is configured to distinguish between synthetic sensor time series data and measured sensor time series data.” The limitation of “distinguish” can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), that is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 18 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in claim 17 beyond the identified judicial exception include “non-transitory machine-readable storage medium storing instructions which, when executed, cause a processing device to perform operations,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not serve to integrate the abstract idea into a practical application.
Claim 17 also recites a “first machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application.
The claim also recites, via claim 17, the limitation of “[(a)] . . . the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not integrate the abstract idea into a practical application.
The claim also recites, via claim 17, limitations of “[(a)] providing the first data comprising a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving,” and “[(d)] utilizing” are insignificant extra-solution activities of mere data gathering and post-processing data transmission, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application. The claim provides more details or specifics to the additional element of “providing second data,” where “the second data comprising one or more labels identifying one or more of a data source of interest in association with the processing chamber or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element. Claim 17 also recites more specifics or details to the additional element of “[(c)] receiving the output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] generated in view of the second data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element. Claim 18 also recites a “third machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application.
Claim 18 also recites the limitations of “[(e)] training the first machine learning model,” “[(e.1)] causing the first machine learning model to generate synthetic sensor time series data,” “[(e.2)] providing the synthetic sensor time series data to a third machine learning model,” “[(e.3)] providing measured sensor time series data to the third machine learning model,” “[(e.4)] providing feedback data to the first machine learning model, indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data,” and “[(e.5)] updating the first machine learning model to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data,” in which “[(e)] training,” “[(e.1)] causing,” “[(e.2), (e.3), (e.4)] providing,” and “[(e.5)] updating” are the use of generic computer components (non-transitory machine-readable storage medium, processing device, first machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that do not integrate the abstract idea into a practical application. Therefore, claim 18 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The claim recites “non-transitory machine-readable storage medium storing instructions which, when executed, cause a processing device to perform operations,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea.
Claim 17 also recites a “first machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.
The claim also recites, via claim 17, the limitation of “[(a)] . . . the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea.
The claim also recites, via claim 17, limitations of “[(a)] providing the first data comprising a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving,” and “[(d)] utilizing” are well-understood, routine, and conventional activities of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that do not amount to significantly more than the abstract idea. The claim provides more details or specifics to the additional element of “providing second data,” where “the second data comprising one or more labels identifying one or more of a data source of interest in association with the processing chamber or a state of interest of the processing chamber,” and accordingly, is merely more specific to the additional element. Claim 17 also recites more specifics or details to the additional element of “[(c)] receiving the output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] generated in view of the second data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element. Claim 18 recites a “third machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.
Claim 18 also recites the limitations of “[(e)] training the first machine learning model,” “[(e.1)] causing the first machine learning model to generate synthetic sensor time series data,” “[(e.2)] providing the synthetic sensor time series data to a third machine learning model,” “[(e.3)] providing measured sensor time series data to the third machine learning model,” “[(e.4)] providing feedback data to the first machine learning model, indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data,” and “[(e.5)] updating the first machine learning model to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data,” in which “[(e)] training,” “[(e.1)] causing,” “[(e.2), (e.3), (e.4)] providing,” and “[(e.5)] updating” are the use of generic computer components (non-transitory machine-readable storage medium, processing device, first machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea. Therefore, claim 18 is subject-matter ineligible.
Claim 20 depends from claim 17, which recites a “non-transitory machine-readable storage medium,” which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101).
However, under Step 2A Prong One, the claim recites the limitation of “[(e.2)] wherein the second machine learning model is configured to predict attributes of the processing chamber based on measured sensor time series data of the processing chamber.” The limitation of “[(e.2)] predict” is a mental process, (MPEP § 2106.04(a)(2) sub III), that is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). Thus, claim 20 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include “non-transitory machine-readable storage medium storing instructions which, when executed, cause a processing device to perform operations,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not serve to integrate the abstract idea into a practical application.
Claim 17 also recites a “first machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. The claim also recites, in claim 17, the limitation of “[(a)] . . . the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not integrate the abstract idea into a practical application.
The claim also recites, via claim 17, limitations of “[(a)] providing first data comprising a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving,” and “[(d)] utilizing” are insignificant extra-solution activities of mere data gathering and post-processing data transmission, (MPEP § 2106.05(g)), that do not integrate the abstract idea into a practical application. The claim also recites more specifics or details to the additional element of “[(c)] receiving the output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element. Claim 20 also recites a “second machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application. Claim 20 also recites the limitation of “[(e)] training a second machine learning model,” which is the use of generic computer components (non-transitory machine-readable storage medium, processing device, second machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not integrate the abstract idea into a practical application.
The claim recites that the training further comprises: “[(e.1)] providing the output synthetic sensor time series data to the second machine learning model as training input,” and “[(e.2)] providing third data indicative of one or more attributes associated with the output synthetic sensor time series data to the second machine learning model as target output,” in which “[(e.1), (e.2)] providing” is the insignificant extra-solution activity of mere data gathering, (MPEP § 2106.05(g)), which does not integrate the abstract idea into a practical application. Therefore, claim 20 is directed to the abstract idea.
Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The claim recites “non-transitory machine-readable storage medium storing instructions which, when executed, cause a processing device to perform operations,” which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea.
Claim 17 also recites a “first machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. The claim also recites, via claim 17, the limitation of “[(a)] . . . the first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber,” where the “processing chamber” generally links the abstract idea to a particular technological environment or field of use for analyzing sensor time series data of industrial sensors, (MPEP § 2106.05(h)), that does not amount to significantly more than the abstract idea.
The claim also recites, via claim 17, limitations of “[(a)] providing the first data comprising a random or pseudo-random input,” “[(b)] providing second data . . . to the first trained machine learning model,” “[(c)] receiving an output from the first trained machine learning model,” and “[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber.” These activities of “[(a), (b)] providing,” “[(c)] receiving,” and “[(d)] utilizing” are well-understood, routine, and conventional activities of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that do not amount to significantly more than the abstract idea. Claim 17 also recites more specifics or details to the additional element of “[(c)] receiving the output,” including “[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber,” and “[(c.2)] generated in view of the first data indicative of one or more attributes,” and accordingly, are merely more specific to the additional element. Claim 20 also recites a “second machine learning model,” which is recited at a high level of generality, and accordingly, is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. Claim 20 also recites the limitation of “[(e)] training a second machine learning model,” which is the use of generic computer components (non-transitory machine-readable storage medium, processing device, second machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.
The claim recites that the training further comprises: “[(e.1)] providing the output synthetic sensor time series data to the second machine learning model as training input,” and “[(e.2)] providing third data indicative of one or more attributes associated with the output synthetic sensor time series data to the second machine learning model as target output,” in which “[(e.1), (e.2)] providing” is the well-understood, routine, and conventional activity of storing and retrieving information in memory, (MPEP § 2106.05(d) sub II.iv), that does not amount to significantly more than the abstract idea. Therefore, claim 20 is subject-matter ineligible.
Claim Rejections - 35 U.S.C. § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
9. Claims 1-20 are rejected under 35 U.S.C. § 103 as being unpatentable over US Published Application 20230104028 to Wang et al. [hereinafter Wang] in view of US Published Application 20200342362 to Soni et al. [hereinafter Soni].
Regarding claims 1, 9 and 17, Wang teaches [a] method (Wang ¶ 0018), [a] system (Wang ¶ 0082), and [a] non-transitory machine-readable storage medium (Wang ¶ 0081), comprising:
[(a)] providing first data comprising a random or pseudo-random input (Wang, claim 6, teaches “execute a random noise generator to provide random noise into the functional processor [(that is, providing first data comprising a random . . . input)]”) to a first trained machine learning model trained to generate synthetic sensor time series data for a processing chamber (Wang, Fig. 3, teaches a high-level architecture of the [Functional Generative Adversarial Network (F-GAN)] 200 [Examiner annotations in dashed-line text boxes]:”
[media_image1.png: Wang Fig. 3, annotated high-level F-GAN architecture (greyscale)]
Wang ¶ 0036 teaches “Functional Generative Adversarial Network building module 200 is where F-GAN is trained with raw historical sensor time series of failures to synthesize additional failure instances that follow the same dynamics with the observed failures in the form of a trained functional generator [(that is, to a first trained machine learning model trained to generate synthetic sensor time series data)]”; Wang ¶ 0004 teaches “failure prediction systems for industrial equipment employ data analytics models to predict the probability of not performing as desired within the next several days, based on sensor data from equipment [(that is, “equipment” and “industrial IoT” is a processing chamber)]”;
[Examiner notes that the broadest reasonable interpretation of “processing chamber” is directed to manufacturing equipment and/or industrial processes, which is consistent with the Specification, (MPEP § 2111), and accordingly covers the teachings of Wang pertaining to “Industrial Internet-of-Things (IIoT) across many industries including manufacturing, logistics, energy, mining, oil, and gas” (Wang ¶ 0002)]);
[(b)] providing second data indicative of one or more attributes of target synthetic sensor time series data (Wang ¶ 0008 & Fig. 1, teaches “Sensor data exhibit complex temporal patterns within each sensor and temporal covariations among different sensors, due to the mechanism of physics systems and the connectivity of components within industrial systems [(that is, “complex temporal patterns” and/or “temporal covariations” are second data indicative of one or more attributes)]”) to the first trained machine learning model (Wang, Fig. 4(a), teaches “flow of the F-GAN building module 200 [Examiner annotations in dashed-line text boxes]:”
[media_image2.png: Wang Fig. 4(a), annotated flow of the F-GAN building module 200 (greyscale)]
Wang ¶ 0043 teaches “[a]t 401 the raw sensor data [(that is, providing the first data . . . to the first trained machine learning model)] is supplied into the sparse multivariate [Functional Principal Component Analysis (FPCA)] to extract continuous temporal patterns. These continuous temporal patterns represent the major modes of variation among sensor data corresponding to failure events [(that is, “major modes of variation” is indicative of one or more attributes of target synthetic sensor time series data)]”; also, Wang ¶ 0063 teaches “[t]he one or more physical systems 801 can involve any kind of asset, apparatus, or physical systems that have sensor systems that can provide sensor data in an IoT environment, such as, but not limited to, edge sensor arrays, robotic arms, vehicles, lathes, air compressors, and so on in accordance with the desired implementation”), . . . ;
[(c)] receiving an output from the first trained machine learning model based on the first data and the second data (Wang ¶ 0042 teaches “the functional generator 300, the flow at 401 to 403 renders a functional generator [403] that generates continuous random curves that follow the same stochastics as the actual failures”; Wang ¶ 0045 teaches “a functional processor [403] first deploys the fully connected neural network to map the random noises [402] into random variables following a complex statistical distribution with tunable parameters [(that is, first data comprising a random . . . input)]. Next, the functional processor combines the achieved random variable and the extracted patterns from 401 [(that is, second data indicative of one or more attributes of target synthetic sensor time series data)] to produce new realizations of continuous time series that resemble the real sensor data corresponding to failures. [(that is, receiving an output from the first trained machine learning model based on the first data and the second data)]”),
[(c.1)] wherein the output comprises synthetic sensor time series data associated with the processing chamber and the data source of interest or the state of interest, [(c.2)] wherein the output is generated in view of the second data indicative of one or more attributes (Wang ¶ 0045 teaches “a functional processor [403] first deploys the fully connected neural network to map the random noises into random variables following a complex statistical distribution with tunable parameters. Next, the functional processor [403] combines the achieved random variable and the extracted patterns from 401 to produce new realizations of continuous time series that resemble the real sensor data corresponding to failures [(that is, the “new realizations of continuous time series” is the output comprising synthetic sensor time series data associated with the processing chamber, wherein the output is generated in view of the second data indicative of one or more attributes)]”); and
[(d)] utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber (Wang, Fig. 6, teaches an example flow of the failure prediction model building module [Examiner annotations in dashed-line text boxes]:
[media_image3.png: Wang Fig. 6, annotated flow of the failure prediction model building module (greyscale)]
Wang ¶ 0056 teaches “The input is the data set containing the synthetic failure instances, and the observed data instances. This new data set is more balanced than the raw data set in which there is much more non-failure data and a limited amount of failure instances. The output of the module is the trained failure predication model based on this data set”; Wang ¶ 0063 teaches “[o]ne or more physical systems 801 are communicatively coupled . . . , which is connected to a management apparatus 802. The management apparatus 802 manages a database 803, which contains historical data collected from the air compressors from each of the physical systems 801 and also facilitates remote control to each of the physical systems 801 [(that is, “physical system control” is utilizing the synthetic sensor time series data to train a second machine learning model to control operation of the processing chamber)]”; see also Wang ¶ 0078 regarding “control the information flow”).
Though Wang teaches the feature of failure / non-failure label data indicated by past failure records, Wang does not explicitly teach –
* * *
[(b) providing second data] . . . , [(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber, or a state of interest of the processing chamber;
* * *
But Soni teaches –
* * *
[(b) providing second data] . . . , [(b.1)] the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber, or a state of interest of the processing chamber (Soni, Table 1, teaches that example events can include one or more of case, maintenance, flow, agent, emergency, etc., events, such as shown in the example of Table 1.
[media_image4.png: reproduction of Soni Table 1]
Soni ¶ 0051 teaches “data contexts are represented in Table 1 . . . , associated with example artificial intelligence models that can provide a prediction, detection, and/or classification using the respective data source”; Soni ¶ 0091 teaches “synthetic data can be multi-channel data and can include a channel in which synthetic events, labels, and/or other annotations associated with the series data is also generated (e.g., as shown in the example of Table 1 and FIGS. 6A-7, etc.). In certain examples, correlation(s) between time series data channels are learned/identified and used to produce an event in an annotation channel [(that is, the second data comprising one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber, or a state of interest of the processing chamber)]”);
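The annotation-channel idea quoted from Soni ¶ 0091 (correlated time series channels plus a generated label channel) can be sketched minimally as follows; the channel semantics and the event rule here are illustrative assumptions, not Soni's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two correlated synthetic sensor channels driven by a shared latent trend
n = 256
base = np.cumsum(rng.normal(size=n))         # shared latent component
ch1 = base + 0.3 * rng.normal(size=n)        # e.g., a pressure-like channel
ch2 = 0.8 * base + 0.3 * rng.normal(size=n)  # e.g., a temperature-like channel

# Annotation channel: mark timesteps where the identified relation between
# channels implies an event of interest (a simple threshold rule here)
score = ch1 + ch2
events = (score > np.percentile(score, 95)).astype(int)

# Multi-channel output: two data channels plus one label/annotation channel
synthetic = np.stack([ch1, ch2, events])
```

Here the third row plays the role of Soni's annotation channel: labels are generated alongside, and derived from, the synthetic series themselves.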
* * *
Wang and Soni are from the same or similar field of endeavor.
Wang teaches a failure prediction system for incoming failures (i.e., production failure that results in higher than expected product defect rate, industrial equipment failure) before they occur.
Soni teaches a synthetic time series data generation apparatus that generates a synthetic data set including multi-channel time series data and associated annotations using a first artificial intelligence network model.
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Wang pertaining to an industrial failure prediction system with the synthetic time series data generation apparatus of Soni.
The motivation to do so is because “large datasets are necessary for computer-driven solutions, such as neural networks and other ‘artificial intelligence’ to assist human clinicians with analysis, optimization, improvement, and/or other decision support. Such large datasets are often missing or unobtainable with current systems and restrictions.” (Soni ¶ 0005).
Regarding claim 2, the combination of Wang and Soni teaches all of the limitations of claim 1, as described above in detail.
Wang teaches -
further comprising:
training the first machine learning model (Wang ¶ 0038 teaches “a functional generator 300 configured to generate multivariate continuous sensor curves 312 from training with arbitrary multivariate sensor data with irregular timestamps 310 received from one or more apparatuses [(that is, training the first machine learning model)]”), wherein training the first machine learning model comprises:
causing the first machine learning model to generate synthetic sensor time series data (Wang ¶ 0045 teaches “a functional processor [403] first deploys the fully connected neural network to map the random noises into random variables following a complex statistical distribution with tunable parameters. Next, the functional processor [403] combines the achieved random variable and the extracted patterns from 401 to produce new realizations of continuous time series that resemble the real sensor data corresponding to failures [(that is, causing the first machine learning model to generate synthetic sensor time series data)]”);
providing the synthetic sensor time series data to a third machine learning model (Wang, Fig. 4(a), teaches “flow of the F-GAN building module 200 [Examiner annotations in dashed-line text boxes]:”
[media_image5.png: annotated reproduction of Wang Fig. 4(A)]
Wang ¶ 0046 teaches “the functional discriminator 301 [(that is, a third machine learning model)] distinguishes the generated data [(that is, the synthetic sensor time series data)] from the actual data, given the time series with arbitrary granularities”) comprising a discriminator (Wang ¶ 0016 teaches “a functional generator and a functional discriminator, where the generator produces synthetic sensor data corresponding to failure events and the discriminator detects the fake data from the sensor data of actual failure [(that is, a third machine learning model comprising a discriminator)]”);
providing measured sensor time series data to the third machine learning model, wherein the third machine learning model is configured to distinguish between synthetic sensor time series data and measured sensor time series data (Wang ¶ 0046 teaches “the functional discriminator 301 distinguishes the generated data [(that is, synthetic sensor time series data)] from the actual data [(that is, measured sensor time series data)], given the time series with arbitrary granularities [(that is, providing measured sensor time series data to the third machine learning model, wherein the third machine learning model is configured to distinguish between synthetic sensor time series data and measured sensor time series data)]”; Wang ¶ 0047 teaches “the synthetic and real sensor data corresponding to failure events are provided into the MPFNN-based functional discriminator [301] that attempts to sort out the synthetic failure data”);
providing feedback data to the first machine learning model (Wang ¶ 0018 teaches “providing feedback to the functional generator [300] to retrain the functional generator [300] [(that is, providing feedback data to the first machine learning model)]”), indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data (Wang ¶ 0047 & Fig. 4(A) teaches “all the parameters in the above procedures are trained to solve the following min-max problem with objective function
[media_image6.png: the min-max objective function of Wang ¶ 0047]
where ‘FG’ and ‘FD’ are respectively the functional generator 300 and functional discriminator 301 [(that is, “probability of being real” via a “min-max problem” is indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data)]”); and
updating the first machine learning model (Wang ¶ 0038 teaches “providing feedback 302 to the functional generator to retrain the functional generator 300 [(that is, updating the first machine learning model)]”) to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data (Wang ¶ 0017 teaches “[Multi-Projection Functional Neural Network (MPFNN)] is a proposed failure predictive model building technique that is capable of handling the irregularity and temporal aspects within sensor data through the idea of basis projection and the BLUE [Best Linear Unbiased Estimation (BLUE)] technique. [Multi-Projection Functional Neural Network (MPFNN)] tends to have improved failure prediction accuracy (i.e., generate failure warning alerts when and only when a failure is approaching) due to the usage of multiple types of basic functions to more comprehensively represent the failure and non-failure sensor data [(that is, the “feedback” causes the third machine learning model to less accurately distinguish synthetic from measured sensor time series data)]”).
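The objective function reproduced above as media_image6.png is not legible in this record. For orientation only, the standard GAN min-max objective, which the quoted min-max problem over ‘FG’ and ‘FD’ appears to instantiate (the exact form in Wang may differ), is:

```latex
\min_{FG}\,\max_{FD}\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log FD(x)\right] + \mathbb{E}_{z \sim p_{z}}\!\left[\log\left(1 - FD\!\left(FG(z)\right)\right)\right]
```

Under this objective, FD is trained to assign a high probability of being real to measured series and a low one to synthetic series, while FG is trained to reduce FD's ability to do so, which is the accuracy signal mapped to the claimed feedback.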
Regarding claims 3 and 11, the combination of Wang and Soni teaches all of the limitations of claims 1 and 9, respectively, as described above in detail.
Wang teaches -
wherein the first trained machine learning model comprises a generator of a generative adversarial network (Wang ¶ 0016 teaches “F-GAN is a proposed data balancing technique that involves a functional generator [(that is, the first trained machine learning model comprises a generator of a generative adversarial network)] and a functional discriminator, where the generator produces synthetic sensor data corresponding to failure events and the discriminator detects the fake data from the sensor data of actual failures”).
Regarding claims 4 and 12, the combination of Wang and Soni teaches all of the limitations of claims 1 and 9, respectively, as described above in detail.
Wang teaches -
wherein the first trained machine learning model comprises a recurrent neural network model (Wang ¶ 0010 teaches “the last type of technique is the existing Generative Adversarial Network (GAN) models for time series data, including [Continuous Recurrent Neural Networks with Adversarial Training (C-RNN-GAN)] [(that is, the first trained machine learning model comprises a recurrent neural network model)]”).
Regarding claim 5, the combination of Wang and Soni teaches all of the limitations of claim 1, as described above in detail.
Wang teaches -
wherein the synthetic sensor time series data comprises data corresponding to one or more of:
power, voltage, or current supplied to a component of the processing chamber;
pressure; or
temperature (Wang, Fig. 1, teaches “irregularity and heterogeneity among sensor measurements are often introduced by . . . the random data capturing mechanism among many IoT systems [Examiner annotations in dashed-line text boxes]:”
[media_image7.png: annotated reproduction of Wang Fig. 1]
Wang ¶ 0007 & Fig. 4(A), set out above, teaches “[r]aw sensor data associated with real failure instances” that is “random data capturing mechanism among many IoT systems [(that is, data corresponding to one or more of . . . pressure; or temperature)],” and is input to “functional generator 300,” which outputs “simulated sensor data resemble the pattern in real failure data [(that is, the synthetic sensor time series data comprises data corresponding to one or more of: . . . pressure; or temperature)]”).
Regarding claims 6, 14, and 20, the combination of Wang and Soni teaches all of the limitations of claims 1, 9, and 17, respectively, as described above in detail.
Wang teaches -
wherein training the second machine learning model comprises:
providing the output synthetic sensor time series data to the second machine learning model as training input; and
providing third data indicative of one or more attributes associated with the output synthetic sensor time series data to the second machine learning model as target output (Wang ¶ 0016 teaches “[t]hese two components [(that is, the first machine learning model and the second machine learning model)] are trained simultaneously against the error of the discriminator in distinguishing fake data [(that is, providing the output synthetic sensor time series data to the second machine learning model as training input)] from real data [(that is, “real data” corresponds to the third data provided to the second machine learning model as target output)], until the error is maximized (i.e., the discriminator cannot tell the difference between fake and real data). . . . The functional discriminator [(that is, the second machine learning model)] enables F-GAN to generate high-quality sensor time series, as it uses the MPFNN to enhance the discriminator's capability of detecting various sorts of differences between real and fake failure data. This forces the functional generator to improve its capacity in resembling patterns among the real failure data”),
wherein the second machine learning model is configured to predict attributes of the processing chamber based on measured sensor time series data of the processing chamber (Wang ¶ 0016 teaches “[t]he functional discriminator [(that is, the second machine learning model)] is capable of handling the irregularity and temporal aspects within sensor data through the idea of basis projection and the [Best Linear Unbiased Estimation (BLUE)] technique [(that is, “irregularity and temporal aspects” is to predict attributes of the processing chamber based on measured sensor time series data of the processing chamber)]”).
Regarding claims 7 and 15, the combination of Wang and Soni teaches all of the limitations of claims 6 and 14, respectively, as described above in detail.
Wang teaches -
wherein the second machine learning model is configured to detect one or more anomalies associated with measured sensor time series data of the processing chamber (Wang ¶¶ 0013-14 teaches “a failure prediction system equipped with a new AI architecture that effectively and efficiently addresses these challenges [(that is, “failure prediction system” is a second machine learning model is configured to detect one or more anomalies)]. . . . [A] proposed AI model that serves as the core algorithm within failure prediction systems in industrial IoTs [(that is, “industrial IoTs” is measured sensor time series data of the processing chamber)]. The proposed AI model consists of an innovative time series data balancing technique called the Functional Generative Adversarial Network (F-GAN) and a new failure predictive model called the Multi-Projection Functional Neural Network (MPFNN) [(that is, the “F-GAN” is the second machine learning model is configured to detect one or more anomalies)]”).
Regarding claims 8, 16, and 19, the combination of Wang and Soni teaches all of the limitations of claims 1, 9, and 17, respectively, as described above in detail.
Wang teaches -
wherein an attribute of target synthetic sensor time series data comprises one or more of:
time since installation of the processing chamber;
time since a previous maintenance event of the processing chamber; or
a fault present in the processing chamber (Wang ¶ 0016 teaches “where the generator produces synthetic sensor data corresponding to failure events [(that is, an attribute of target synthetic sensor time series)] and the discriminator detects the fake data from the sensor data of actual failures [(that is, an attribute of target synthetic sensor time series data comprises one or more of: . . . a fault present in the processing chamber)]”).
Regarding claims 10 and 18, the combination of Wang and Soni teaches all of the limitations of claims 9 and 17, respectively, as described above in detail.
Wang teaches -
further comprising:
training the first machine learning model (Wang ¶ 0038 teaches “a functional generator 300 configured to generate multivariate continuous sensor curves 312 from training with arbitrary multivariate sensor data with irregular timestamps 310 received from one or more apparatuses [(that is, training the first machine learning model)]”), wherein training the first machine learning model comprises:
causing the first machine learning model to generate synthetic sensor time series data (Wang ¶ 0045 teaches “a functional processor [403] first deploys the fully connected neural network to map the random noises into random variables following a complex statistical distribution with tunable parameters. Next, the functional processor [403] combines the achieved random variable and the extracted patterns from 401 to produce new realizations of continuous time series that resemble the real sensor data corresponding to failures [(that is, causing the first machine learning model to generate synthetic sensor time series data)]”);
providing the synthetic sensor time series data to a third machine learning model (Wang, Fig. 4(a), teaches “flow of the F-GAN building module 200 [Examiner annotations in dashed-line text boxes]:”
[media_image5.png: annotated reproduction of Wang Fig. 4(A)]
Wang ¶ 0046 teaches “the functional discriminator 301 [(that is, a third machine learning model)] distinguishes the generated data [(that is, the synthetic sensor time series data)] from the actual data, given the time series with arbitrary granularities”);
providing measured sensor time series data to the third machine learning model, wherein the third machine learning model is configured to distinguish between synthetic sensor time series data and measured sensor time series data (Wang ¶ 0046 teaches “the functional discriminator 301 distinguishes the generated data [(that is, synthetic sensor time series data)] from the actual data [(that is, measured sensor time series data)], given the time series with arbitrary granularities [(that is, providing measured sensor time series data to the third machine learning model, wherein the third machine learning model is configured to distinguish between synthetic sensor time series data and measured sensor time series data)]”; Wang ¶ 0047 teaches “the synthetic and real sensor data corresponding to failure events are provided into the MPFNN-based functional discriminator [301] that attempts to sort out the synthetic failure data”);
providing feedback data to the first machine learning model (Wang ¶ 0018 teaches “providing feedback to the functional generator [300] to retrain the functional generator [300] [(that is, providing feedback data to the first machine learning model)]”), indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data (Wang ¶ 0047 & Fig. 4(A) teaches “all the parameters in the above procedures are trained to solve the following min-max problem with objective function
[media_image6.png: the min-max objective function of Wang ¶ 0047]
where ‘FG’ and ‘FD’ are respectively the functional generator 300 and functional discriminator 301 [(that is, “probability of being real” via a “min-max problem” is indicative of how accurately the third machine learning model distinguished synthetic from measured sensor time series data)]”); and
updating the first machine learning model (Wang ¶ 0038 teaches “providing feedback 302 to the functional generator to retrain the functional generator 300 [(that is, updating the first machine learning model)]”) to generate synthetic sensor time series data that the third machine learning model less accurately distinguishes from measured sensor time series data (Wang ¶ 0017 teaches “[Multi-Projection Functional Neural Network (MPFNN)] is a proposed failure predictive model building technique that is capable of handling the irregularity and temporal aspects within sensor data through the idea of basis projection and the BLUE [Best Linear Unbiased Estimation (BLUE)] technique. [Multi-Projection Functional Neural Network (MPFNN)] tends to have improved failure prediction accuracy (i.e., generate failure warning alerts when and only when a failure is approaching) due to the usage of multiple types of basic functions to more comprehensively represent the failure and non-failure sensor data [(that is, the “feedback” causes the third machine learning model to less accurately distinguish synthetic from measured sensor time series data)]”).
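In generic form, the loop mapped above (functional generator 300 produces synthetic series, functional discriminator 301 classifies them against measured series, and feedback 302 retrains the generator) is standard adversarial training. The following is a minimal illustrative sketch with linear models on toy data; it is not Wang's F-GAN, which operates on functional (basis-projected) representations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "measured sensor time series": noisy sinusoids of shape (batch, length)
def real_batch(n=64, length=32):
    t = np.linspace(0, 2 * np.pi, length)
    phase = rng.uniform(0, 2 * np.pi, size=(n, 1))
    return np.sin(t + phase) + 0.05 * rng.normal(size=(n, length))

# Generator: linear map from noise to a series (role of generator 300)
W_g = 0.1 * rng.normal(size=(8, 32))
# Discriminator: logistic regression over a series (role of discriminator 301)
w_d, b_d = np.zeros(32), 0.0

def generate(n=64):
    return rng.normal(size=(n, 8)) @ W_g

def disc_prob(x):
    """Probability the discriminator assigns to 'real'."""
    s = np.clip(x @ w_d + b_d, -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-s))

lr = 0.05
for _ in range(200):
    xr, xf = real_batch(), generate()
    # Discriminator step: ascend log-likelihood (real -> 1, synthetic -> 0)
    pr, pf = disc_prob(xr), disc_prob(xf)
    w_d += lr * (xr.T @ (1 - pr) / len(xr) - xf.T @ pf / len(xf))
    b_d += lr * (np.mean(1 - pr) - np.mean(pf))
    # Generator step ("feedback 302"): push synthetic toward being called real
    z = rng.normal(size=(64, 8))
    xf = z @ W_g
    pf = disc_prob(xf)
    W_g += lr * (z.T @ np.outer(1 - pf, w_d)) / len(z)
```

The discriminator's accuracy in separating the two data sources is the feedback signal of the claim mapping; the generator update is a gradient step that lowers that accuracy.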
Regarding claim 13, the combination of Wang and Soni teaches all of the limitations of claim 9, as described above in detail.
Wang teaches -
wherein the synthetic sensor time series data comprises data corresponding to one or more of:
power, voltage, or current supplied to one or more of:
a radio frequency plasma generation component;
a heater; or
a substrate support,
pressure; or
temperature (Wang, Fig. 1, teaches “irregularity and heterogeneity among sensor measurements are often introduced by . . . the random data capturing mechanism among many IoT systems [Examiner annotations in dashed-line text boxes]:”
[media_image7.png: annotated reproduction of Wang Fig. 1]
Wang ¶ 0007 & Fig. 4(A), set out above, teaches “[r]aw sensor data associated with real failure instances” that is “random data capturing mechanism among many IoT systems [(that is, data corresponding to one or more of . . . pressure; or temperature)],” and is input to “functional generator 300,” which outputs “simulated sensor data resemble the pattern in real failure data [(that is, the synthetic sensor time series data comprises data corresponding to one or more of: . . . pressure; or temperature)]”).
Response to Arguments
10. Examiner has fully considered Applicant’s arguments, and responds below accordingly.
35 U.S.C. § 101
11. Applicant submits that “Applicants respectfully submit that pursuant to the subject matter eligibility analysis under the PEG, the claims are not directed to a judicial exception and are patent eligible for at least the following reasons. While it is not contested that the claims are respectively directed to statutory category of patentable subject matter, the Examiner asserts that claims 2, 6-7, 10, 14-15, 18, and 20 are directed to the judicial exception of an abstract idea. Therefore for at least this reason, the eligibility analysis should end at the first prong of Revised Step 2A.” (Response at p. 10).
Examiner’s Response:
Applicant does not specifically point to the language that Applicant considers the Examiner to have erroneously evaluated; the finding of an abstract idea in the respective claims is therefore maintained.
12. Applicant submits that under Step 2A Prong Two, “Dependent claims 2-8, 10-16, and 18-20 are also directed to patent eligible subject matter at least by virtue of their respective dependencies from claim 1. Similar language is also included in independent claims 9 and 17.” (Response at p. 11).
Examiner’s Response:
Examiner respectfully disagrees because though an independent claim may be subject-matter eligible, a claim depending therefrom may be subject-matter ineligible.
Specifically, “even if an independent claim is determined to be eligible, a dependent claim may be ineligible because it adds a judicial exception without also adding limitations that integrate the judicial exception or provide significantly more. Thus, each claim in an application should be considered separately based on the particular elements recited therein.” (MPEP § 2106.07).
Accordingly, dependent claims 2, 6, 7, 10, 14, 15, 18, and 20 are subject-matter ineligible for the reasons set out hereinabove.
35 U.S.C. § 103
13. Applicant submits that “Soni does not remedy the shortcomings of Wang with respect to claim 1, as amended. Soni is directed to medical machine synthetic data and event generation. (Soni, Abstract.) Soni teaches a ‘data generator’ in connection with ‘synthetic data’ that ‘can be multi-channel data and can include a channel in which synthetic events, labels, and/or other annotations associated with the series data is also generated.’ (Soni, paragraph [0091]). However, Soni is silent regarding ‘one or more labels identifying one or more of a data source of interest comprising a sensor type, sensor location, or processing recipe associated with a processing chamber, or a state of interest of the processing chamber.’” (Response at pp. 12-13).
Examiner’s Response:
Examiner respectfully disagrees because the claim as presented covers the teachings of Soni as set out hereinabove regarding, inter alia, Table 1 of Soni.
Also, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. Where a rejection of a claim is based on two or more references, a reply that is limited to what a subset of the applied references teaches or fails to teach, or that fails to address the combined teaching of the applied references may be considered to be an argument that attacks the reference(s) individually, as is the case here with the cited prior art of Soni. MPEP § 2145.IV.
Moreover, the rejections hereinabove clearly set forth which claim limitations are taught by each of the prior art references, and the reasons why it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant's invention to combine their teachings; Applicant has not explained why the cited prior art references cannot be combined in the manner set forth in the rejection.
Conclusion
14. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
(US Published Application 20200294224 to Shaubi et al.) teaches processing a fabrication process (FP) sample, wherein the FP sample comprises first FP image(s) received from first examination modality(s) and second FP image(s) received from second examination modality(s) which differs from the first examination modality(s). The trained DNN processes the first FP image(s) separately from the second FP image(s), and further processes the results of such separate processing to obtain examination-related data specific to the given application and characterizing at least one of the processed FP images.
(US Published Application 20210405544 to Werkman et al.) teaches obtaining a training data set including synthetic metrology data, the training data set being configured for training of a model relating to a manufacturing process for manufacturing an integrated circuit. The method includes obtaining behavioral property data describing a behavior of a process parameter resultant from the manufacturing process and/or a related tool or effect.
(US Published Application 20200243359 to Hao et al.) teaches training a neural network by feeding a first set of input time-series data of one or more sensors of a first processing chamber that is within specification to the neural network to produce a corresponding first set of output time-series data. The server calculates a first error. The server feeds a second set of input time-series data from corresponding one or more sensors associated with a second processing chamber under test to the trained neural network to produce a corresponding second set of output time-series data. The server calculates a second error. Responsive to the difference between the second error and the first error being equal to or exceeding a threshold amount . . . .
15. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH whose telephone number is (571) 272-5964. Normally, the Examiner is available on Monday-Thursday 0730-1730.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI can be reached on 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.L.S./
Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122