Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the Application filed on 12/10/2025.
Claims 1-23 are pending in the case. Claims 1, 10, 14, 18 and 21 are independent claims. Claims 1, 10, 14, 18 and 21 are currently amended.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1-23 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A). The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult the 2019 PEG for more details of the analysis.
Step 1 Analysis: Is the claim to a process, machine, manufacture or composition of matter? See MPEP § 2106.03.
Claim(s) 1-13 and 21-23 are drawn to a processor-implemented method, and claims 14-20 are drawn to an electronic apparatus; therefore, each of these claim groups falls within one of the four categories of statutory subject matter (processes/methods, machines/apparatus, manufactures, and compositions of matter; Step 1). Nonetheless, the claims are directed to a judicially recognized exception, an abstract idea, without significantly more (Step 2A, see below). Independent claims 1 and 10 are not verbatim identical but are similar in claim construction, and hence share the same rationale that the claimed inventions are directed to non-statutory subject matter, as follows:
Regarding claim 1:
Claim 1 recites: A processor-implemented method, comprising: generating first prediction data based on first data by inputting the first data to a trained auxiliary prediction model trained based on clean data including a clean label without noise;
estimating a first uncertainty by inputting the first data and a first label to a primary uncertainty model, wherein the first uncertainty is obtained as an output of the primary uncertainty model based on a correlation between the first data and the first label including noise;
determining a first uncertainty loss based on the generated first prediction data, the first label, and the estimated first uncertainty;
training the primary uncertainty model based on the determined first uncertainty loss;
generating second prediction data based on second data by inputting the second data to a primary prediction model;
estimating a second uncertainty by inputting the second data and a second label to the trained primary uncertainty model, wherein the second uncertainty is obtained as an output of trained primary uncertainty model based on a correlation between the second data and the second label;
determining a second uncertainty loss based on the generated second prediction data, the second label, and the estimated second uncertainty; and
training the primary prediction model based on the determined second uncertainty loss to improve prediction performance in presence of noise included in label of training data.
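For context only, the two-stage procedure recited above can be sketched in a toy form. The sketch is illustrative, not a characterization of Applicant's disclosed implementation: the scalar models, the exponential uncertainty function, and the heteroscedastic (variance-weighted) form of the uncertainty loss are all hypothetical choices, consistent with the loss examples discussed under Prong One below.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    return w * x  # toy scalar stand-in for a "prediction model"

def variance(params, x, label):
    # Toy "uncertainty model": its output depends on the relationship
    # (residual/correlation) between the data and its possibly noisy label.
    a, b = params
    return np.exp(a * (label - x) ** 2 + b)  # always positive

def uncertainty_loss(pred, label, var):
    # Variance-weighted (heteroscedastic) loss: a large estimated
    # uncertainty down-weights the squared error on a noisy label.
    return np.mean((pred - label) ** 2 / (2.0 * var) + 0.5 * np.log(var))

def num_grad(f, p, eps=1e-5):
    # Central-difference gradient, to keep the sketch dependency-free.
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p); d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

# Stage 1: train the primary uncertainty model against predictions from the
# auxiliary model (assumed already trained on clean labels).
aux_w = 1.0
x1 = rng.normal(size=200)
y1 = x1 + rng.normal(scale=0.1, size=200)   # first data / first (noisy) label
unc = np.array([0.0, 0.0])
for _ in range(300):
    f = lambda p: uncertainty_loss(predict(aux_w, x1), y1, variance(p, x1, y1))
    unc -= 0.1 * num_grad(f, unc)           # first uncertainty loss step

# Stage 2: train the primary prediction model with the trained
# uncertainty model held fixed.
x2 = rng.normal(size=200)
y2 = x2 + rng.normal(scale=0.1, size=200)   # second data / second label
var2 = variance(unc, x2, y2)                # second uncertainty estimate
pri_w = np.array([0.0])
for _ in range(300):
    f = lambda p: uncertainty_loss(predict(p[0], x2), y2, var2)
    pri_w -= 0.005 * num_grad(f, pri_w)     # second uncertainty loss step
```

Under this toy setup the learned per-sample variance approaches the label-noise level, and the primary model's weight approaches the noise-robust (inverse-variance weighted) least-squares solution.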
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 1 is directed to an abstract idea, specifically a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). The claim is also directed to a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C).
Independent claim 1 recites in part:
“estimating a first uncertainty by inputting the first data and a first label to a primary uncertainty model, wherein the first uncertainty is obtained as an output of the primary uncertainty model based on a correlation between the first data and the first label including noise”
The limitation above is broadly and reasonably interpreted as a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of finding or determining an uncertainty of an input, based on the first data and first label, which is an observation or evaluation that is practically capable of being performed in the human mind with the assistance of pen and paper. See MPEP § 2106.04(a)(2)(III).
“determining a first uncertainty loss based on the generated first prediction data, the first label, and the estimated first uncertainty”
The limitation above is broadly and reasonably interpreted as both a mathematical concept and a mental process. See MPEP §§ 2106.04(a)(2)(I)(C) and 2106.04(a)(2)(III). A loss is, by definition in machine learning, a numerical cost function (e.g., a variance-weighted loss, likelihood loss, or entropy-based loss), so determining it is a mathematical calculation. The step also fits the definition of a mental process: it is an evaluation (assessing how "good" a prediction is), and a human can mentally make such a determination based on known inputs.
“estimating a second uncertainty by inputting the second data and a second label to the trained primary uncertainty model, wherein the second uncertainty is obtained as an output of trained primary uncertainty model based on a correlation between the second data and the second label”
The limitation above is broadly and reasonably interpreted as a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of finding or determining an uncertainty of an input, based on the second data and second label, which is an observation or evaluation that is practically capable of being performed in the human mind with the assistance of pen and paper. See MPEP § 2106.04(a)(2)(III).
“determining a second uncertainty loss based on the generated second prediction data, the second label, and the estimated second uncertainty”
The limitation above is broadly and reasonably interpreted as both a mathematical concept and a mental process. See MPEP §§ 2106.04(a)(2)(I)(C) and 2106.04(a)(2)(III). A loss is, by definition in machine learning, a numerical cost function (e.g., a variance-weighted loss, likelihood loss, or entropy-based loss), so determining it is a mathematical calculation. The step also fits the definition of a mental process: it is an evaluation (assessing how "good" a prediction is), and a human can mentally make such a determination based on known inputs.
Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).
Independent claim 1 recites in part:
A processor-implemented method, comprising: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim recites only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generating first prediction data based on first data by inputting the first data to a trained auxiliary prediction model trained based on clean data including a clean label without noise, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
generating second prediction data based on second data by inputting the second data to a primary prediction model, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exceptions on a computer. See MPEP §§ 2106.05(f) and 2106.04(d).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considered as an ordered combination and as a whole.
Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.
First, additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are deemed insufficient to transform the judicial exception into a patentable invention, because such limitations merely link the judicial exception to a technological environment. See MPEP § 2106.05(h). However, they are addressed below for the sake of completeness.
Second, additional elements amounting to a mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, are deemed insufficient to transform the judicial exception into a patentable invention, because such limitations merely apply the judicial exception using a generic computer and/or process. See MPEP § 2106.05(f). However, they are addressed below for the sake of completeness.
Independent claim 1 recites in part:
A processor-implemented method, comprising: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim recites only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generating first prediction data based on first data by inputting the first data to a trained auxiliary prediction model trained based on clean data including a clean label without noise, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
generating second prediction data based on second data by inputting the second data to a primary prediction model, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, examining the elements as recited by the limitations individually and as an ordered combination, the independent claim, as a whole, does not recite what the courts have identified as "significantly more."
As to claim 10:
Claim 10 recites: A processor implemented method, the method comprising:
generating prediction data based on input data of a plurality of prediction models respectively by implementing the plurality of prediction models;
determining a plurality of uncertainty values respectively by inputting the generated prediction data and the input data to uncertainty models, wherein the plurality of uncertainty values are obtained as a respective output of a corresponding uncertainty model of the uncertainty models based on a correlation between the input data and a label associated with the input data;
determining weights corresponding to the generated prediction data based on the determined uncertainty values;
applying the determined weights to the corresponding generated prediction data; and
determining final prediction data based on results of the applying to improve predictive performance in the presence of label noise.
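For context only, the weighting and combining steps recited above can be illustrated with a short sketch. The softmax-over-negative-uncertainty weighting is a hypothetical choice made for illustration; the claim does not fix any particular function for deriving the weights from the uncertainty values.

```python
import numpy as np

def ensemble_predict(predictions, uncertainties):
    """Combine per-model predictions, down-weighting high-uncertainty models.

    Weights are a softmax over negative uncertainty (one hypothetical choice);
    the final prediction is the weighted combination of the predictions.
    """
    u = np.asarray(uncertainties, dtype=float)
    w = np.exp(-u)
    w = w / w.sum()                      # determine weights from uncertainties
    preds = np.asarray(predictions, dtype=float)
    return float(np.dot(w, preds)), w    # apply weights; final prediction

# Example: three models predict, the third with high uncertainty (e.g., a
# model degraded by label noise); its contribution is strongly down-weighted.
final, weights = ensemble_predict([1.0, 1.1, 5.0], [0.1, 0.2, 3.0])
```

In this toy example the outlying third prediction receives a near-zero weight, so the final prediction stays near the two low-uncertainty predictions.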
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 10 is directed to an abstract idea, specifically a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). The claim is also directed to a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C).
Independent claim 10 recites in part:
“determining a plurality of uncertainty values respectively by inputting the generated prediction data and the input data to uncertainty models, wherein the plurality of uncertainty values are obtained as a respective output of a corresponding uncertainty model of the uncertainty models based on a correlation between the input data and a label associated with the input data”
The limitation above is broadly and reasonably interpreted as a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of finding or determining an uncertainty of an input, based on the input data and its associated label, which is an observation or evaluation that is practically capable of being performed in the human mind with the assistance of pen and paper. See MPEP § 2106.04(a)(2)(III).
“determining weights corresponding to the generated prediction data based on the determined uncertainty values”
The limitation above is broadly and reasonably interpreted as both a mathematical concept and a mental process. See MPEP §§ 2106.04(a)(2)(I)(C) and 2106.04(a)(2)(III). Determining weights from uncertainty values is a mathematical calculation (e.g., a normalization or inverse-variance weighting). The step also fits the definition of a mental process: it is an evaluation (assessing how "good" a prediction is), and a human can mentally make such a determination based on known inputs.
“determining final prediction data based on results of the applying to improve predictive performance in the presence of label noise”
The limitation above is broadly and reasonably interpreted as a mental process: it is an evaluation (assessing how "good" a prediction is), and a human can mentally make such a determination based on known inputs. See MPEP § 2106.04(a)(2)(III).
Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).
Independent claim 10 recites in part:
A processor, as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim recites only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generating prediction data based on input data of a plurality of prediction models respectively by implementing the plurality of prediction models, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exceptions on a computer. See MPEP §§ 2106.05(f) and 2106.04(d).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considered as an ordered combination and as a whole.
Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.
First, additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are deemed insufficient to transform the judicial exception into a patentable invention, because such limitations merely link the judicial exception to a technological environment. See MPEP § 2106.05(h). However, they are addressed below for the sake of completeness.
Second, additional elements amounting to a mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, are deemed insufficient to transform the judicial exception into a patentable invention, because such limitations merely apply the judicial exception using a generic computer and/or process. See MPEP § 2106.05(f). However, they are addressed below for the sake of completeness.
Independent claim 10 recites in part:
A processor implemented method, the method comprising: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim recites only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generating prediction data based on input data of a plurality of prediction models respectively by implementing the plurality of prediction models, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, examining the elements as recited by the limitations individually and as an ordered combination, the independent claim, as a whole, does not recite what the courts have identified as "significantly more."
As to claim 14:
Claim 14 recites: An electronic apparatus comprising:
a memory, configured to store a trained auxiliary prediction model, a primary uncertainty model, and a primary prediction model; and
a processor, configured to:
generate first prediction data on first data by inputting the first data to the trained auxiliary prediction model,
estimate a first uncertainty by inputting the first data and a first label to the primary uncertainty model, wherein the first uncertainty is obtained as an output of the primary uncertainty model based on a correlation between the first data and the first label,
determine a first uncertainty loss based on the generated first prediction data, the first label, and the estimated first uncertainty,
train the primary uncertainty model based on the determined first uncertainty loss,
generate second prediction data based on second data by inputting the second data to the primary prediction model,
estimate a second uncertainty by inputting the second data and a second label to the trained primary uncertainty model, wherein the second uncertainty is obtained as an output of trained primary uncertainty model based on a correlation between the second data and the second label,
determine a second uncertainty loss based on the generated second prediction data, the second label, and the estimated second uncertainty, and
train the primary prediction model based on the determined second uncertainty loss to improve prediction performance in presence of noise included in label of training data.
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 14 is directed to an abstract idea, specifically a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). The claim is also directed to a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C).
Independent claim 14 recites in part:
“estimate a first uncertainty by inputting the first data and a first label to the primary uncertainty model, wherein the first uncertainty is obtained as an output of the primary uncertainty model based on a correlation between the first data and the first label”
The limitation above is broadly and reasonably interpreted as a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of finding or determining an uncertainty of an input, based on the first data and first label, which is an observation or evaluation that is practically capable of being performed in the human mind with the assistance of pen and paper. See MPEP § 2106.04(a)(2)(III).
“determine a first uncertainty loss based on the generated first prediction data, the first label, and the estimated first uncertainty”
The limitation above is broadly and reasonably interpreted as both a mathematical concept and a mental process. See MPEP §§ 2106.04(a)(2)(I)(C) and 2106.04(a)(2)(III). A loss is, by definition in machine learning, a numerical cost function (e.g., a variance-weighted loss, likelihood loss, or entropy-based loss), so determining it is a mathematical calculation. The step also fits the definition of a mental process: it is an evaluation (assessing how "good" a prediction is), and a human can mentally make such a determination based on known inputs.
estimate a second uncertainty by inputting the second data and a second label to the trained primary uncertainty model, wherein the second uncertainty is obtained as an output of trained primary uncertainty model based on a correlation between the second data and the second label
The limitation above is broadly and reasonably interpreted as a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of finding or determining an uncertainty of an input, based on the second data and second label, which is an observation or evaluation that is practically capable of being performed in the human mind with the assistance of pen and paper. See MPEP § 2106.04(a)(2)(III).
determine a second uncertainty loss based on the generated second prediction data, the second label, and the estimated second uncertainty
The limitation above is broadly and reasonably interpreted as both a mathematical concept and a mental process. See MPEP §§ 2106.04(a)(2)(I)(C) and 2106.04(a)(2)(III). A loss is, by definition in machine learning, a numerical cost function (e.g., a variance-weighted loss, likelihood loss, or entropy-based loss), so determining it is a mathematical calculation. The step also fits the definition of a mental process: it is an evaluation (assessing how "good" a prediction is), and a human can mentally make such a determination based on known inputs.
Step 2A Prong Two Analysis: Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP § 2106.04(d).
Independent claim 14 recites in part:
An electronic apparatus comprising:
a memory, configured to store a trained auxiliary prediction model, a primary uncertainty model, and a primary prediction model; and
a processor, configured to: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim recites only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generate first prediction data on first data by inputting the first data to the trained auxiliary prediction model, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
generate second prediction data based on second data by inputting the second data to the primary prediction model, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exceptions on a computer. See MPEP §§ 2106.05(f) and 2106.04(d).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considered as an ordered combination and as a whole.
Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.
First, additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are deemed insufficient to transform the judicial exception into a patentable invention, because such limitations merely link the judicial exception to a technological environment. See MPEP § 2106.05(h). However, they are addressed below for the sake of completeness.
Second, additional elements amounting to a mere application of the abstract idea, or mere instructions to implement an abstract idea on a computer, are deemed insufficient to transform the judicial exception into a patentable invention, because such limitations merely apply the judicial exception using a generic computer and/or process. See MPEP § 2106.05(f). However, they are addressed below for the sake of completeness.
Independent claim 14 recites in part:
An electronic apparatus comprising:
a memory, configured to store a trained auxiliary prediction model, a primary uncertainty model, and a primary prediction model; and
a processor, configured to: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim recites only generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generate first prediction data on first data by inputting the first data to the trained auxiliary prediction model, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
generate second prediction data based on second data by inputting the second data to the primary prediction model, as drafted, amounts to "insignificant extra-solution activity": a step of generating an output from received data (post-solution activity, i.e., a step occurring after a result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, examining the elements as recited by the limitations individually and as an ordered combination, the independent claim, as a whole, does not recite what the courts have identified as "significantly more."
As to claim 18:
Claim 18 recites: An electronic apparatus comprising: a memory configured to store a plurality of prediction models and a plurality of uncertainty models; and
a processor configured to: generate prediction data based on input data of the prediction models respectively by implementing the prediction models,
determine a plurality of uncertainty values respectively by inputting the generated prediction data and the input data to uncertainty models, wherein the plurality of uncertainty values are obtained as a respective output of a corresponding uncertainty model of the uncertainty models based on a correlation between the input data and corresponding labels,
determine weights corresponding to the generated prediction data based on the determined uncertainty values,
apply the determined weights to the corresponding generated prediction data,
and
determine final prediction data based on results of the applying to improve predictive accuracy in the presence of noisy labels
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 18 is directed to an abstract idea, specifically, a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). The claim also recites a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C).
Independent claim 18 recites in part:
determine a plurality of uncertainty values respectively by inputting the generated prediction data and the input data to uncertainty models, wherein the plurality of uncertainty values are obtained as a respective output of a corresponding uncertainty model of the uncertainty models based on a correlation between the input data and corresponding labels
The limitation above is broadly and reasonably interpreted as a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of finding or determining an uncertainty value from the prediction data and the input data, which is an observation or evaluation that is practically capable of being performed in the human mind with the assistance of pen and paper. See MPEP § 2106.04(a)(2)(III).
determine weights corresponding to the generated prediction data based on the determined uncertainty values
The limitation above is broadly and reasonably interpreted as a mathematical concept and a mental process. See MPEP § 2106.04(a)(2)(I)(C) and MPEP § 2106.04(a)(2)(III). Determining a weight from an uncertainty value is, by definition in machine learning, a numerical calculation on a cost value (e.g., a variance-weighted loss, likelihood loss, or entropy-based loss). The step also fits the definition of a mental process: it is an evaluation (assessing how "good" a prediction is). Therefore, a human can mentally make such a determination based on known inputs.
determine final prediction data based on results of the applying to improve predictive accuracy in the presence of noisy labels
The limitation above is broadly and reasonably interpreted as a mental process. The step fits the definition of a mental process: it is an evaluation (assessing how "good" a prediction is). Therefore, a human can mentally make such a determination based on known inputs. See MPEP § 2106.04(a)(2)(III).
Independent claim 18 recites in part:
An electronic apparatus comprising: a memory configured to store a plurality of prediction models and a plurality of uncertainty models; and
a processor configured to: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim only recites generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generate prediction data based on input data of the prediction models respectively by implementing the prediction models, as drafted, amounts to "insignificant extra-solution activity", a step of generating an output from received data (post-solution activity, i.e., a step occurring after the result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considered as an ordered combination and as a whole.
Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.
First, the additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are insufficient to transform the judicial exception into a patentable invention because the claimed limitations merely link the judicial exception to that technological environment. See MPEP § 2106.05(h). However, they are addressed below for the sake of completeness.
Second, the additional elements directed to mere application of the abstract idea, or mere instructions to implement the abstract idea on a computer, are insufficient to transform the judicial exception into a patentable invention because the limitations merely apply a generic computer and/or process to the judicial exception. See MPEP § 2106.05(f). However, they are addressed below for the sake of completeness.
Independent claim 18 recites in part:
An electronic apparatus comprising: a memory configured to store a plurality of prediction models and a plurality of uncertainty models; and
a processor configured to: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim only recites generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generate prediction data based on input data of the prediction models respectively by implementing the prediction models, as drafted, amounts to "insignificant extra-solution activity", a step of generating an output from received data (post-solution activity, i.e., a step occurring after the result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, considering the additional elements individually, in combination, and with the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, in examining the elements recited by the limitations individually and as an ordered combination, as a whole the independent claim limitations do not recite what the courts have identified as "significantly more".
As to claim 21
Claim 21 recites: A processor-implemented method, comprising:
generating prediction data by inputting first data to one of an auxiliary prediction model;
estimating an aleatoric uncertainty by inputting the first data and clean label data to an auxiliary uncertainty model, wherein the aleatoric uncertainty is obtained as an output of the auxiliary uncertainty model based on a correlation between the first data and the clean label data;
calculating an uncertainty loss based on the clean label data, the generated prediction data, and the estimated aleatoric uncertainty, and
training the auxiliary prediction model and the auxiliary uncertainty model based on the uncertainty loss to improve prediction performance in presence of noise included in label of training data
Step 2A Prong One Analysis: Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP § 2106.04(II)(A)(1).
Claim 21 is directed to an abstract idea, specifically, a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). The claim also recites a mathematical concept: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C).
Independent claim 21 recites in part:
estimating an aleatoric uncertainty by inputting the first data and clean label data to an auxiliary uncertainty model, wherein the aleatoric uncertainty is obtained as an output of the auxiliary uncertainty model based on a correlation between the first data and the clean label data
The limitation above is broadly and reasonably interpreted as a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, or opinion). See MPEP § 2106.04(a)(2)(III). Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of finding or determining an uncertainty of an input from the first data and the clean label data, which is an observation or evaluation that is practically capable of being performed in the human mind with the assistance of pen and paper. See MPEP § 2106.04(a)(2)(III).
calculating an uncertainty loss based on the clean label data, the generated prediction data, and the estimated aleatoric uncertainty
Under its broadest reasonable interpretation in light of the specification, the limitation above recites a mathematical calculation and therefore falls within the "mathematical concepts" grouping: "a mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number." See MPEP § 2106.04(a)(2)(I)(C).
Independent claim 21 recites in part:
A processor-implemented method, comprising: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim only recites generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generating prediction data by inputting first data to one of an auxiliary prediction model, as drafted, amounts to "insignificant extra-solution activity", a step of generating an output from received data (post-solution activity, i.e., a step occurring after the result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, these additional elements are recited in a manner that represents no more than mere instructions to apply the judicial exception on a computer. See MPEP § 2106.05(f) and § 2106.04(d).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea when considered as an ordered combination and as a whole.
Step 2B Analysis: Does the claim recite additional elements that amount to significantly more than the judicial exception? See MPEP § 2106.05.
First, the additional elements directed to generally linking the use of a judicial exception to a particular technological environment or field of use are insufficient to transform the judicial exception into a patentable invention because the claimed limitations merely link the judicial exception to that technological environment. See MPEP § 2106.05(h). However, they are addressed below for the sake of completeness.
Second, the additional elements directed to mere application of the abstract idea, or mere instructions to implement the abstract idea on a computer, are insufficient to transform the judicial exception into a patentable invention because the limitations merely apply a generic computer and/or process to the judicial exception. See MPEP § 2106.05(f). However, they are addressed below for the sake of completeness.
Independent claim 21 recites in part:
A processor-implemented method, comprising: as drafted, amounts to a judicial exception that is not integrated into a practical application. In particular, the claim only recites generic computing components. Such generic computing components are recited at a high level of generality (i.e., as a generic processor performing data gathering and mathematical calculations) such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
generating prediction data by inputting first data to one of an auxiliary prediction model, as drafted, amounts to "insignificant extra-solution activity", a step of generating an output from received data (post-solution activity, i.e., a step occurring after the result is obtained). See MPEP §§ 2106.04(d), 2106.05(g).
Thus, considering the additional elements individually, in combination, and with the claims as a whole, the additional elements do not provide significantly more than the abstract idea. The claims are not eligible subject matter.
Therefore, in examining the elements recited by the limitations individually and as an ordered combination, as a whole the independent claim limitations do not recite what the courts have identified as "significantly more".
Furthermore, regarding dependent claims 2-9 (dependent on claim 1), claims 11-13 (dependent on claim 10), claims 15-17 (dependent on claim 14), claims 19-20 (dependent on claim 18), and claims 22-23 (dependent on claim 21), the claims are directed to a judicial exception without significantly more, as shown below by evaluating the claim limitations under Steps 2A and 2B:
Claims 2 and 15 are dependent on claims 1 and 14 respectively; no additional limitation is recited that integrates the judicial exception into a practical application.
These claims incorporate the rejection of independent claims 1 and 14, and all elements are part of the abstract idea as shown above.
No additional limitation is recited that integrates the judicial exception into a practical application.
No additional limitation is recited that integrates the judicial exception into a practical application, and no additional element amounts to more than what is well-understood, routine, and conventional.
Claims 3 and 16 are dependent on claims 1 and 14 respectively; no additional limitation is recited that integrates the judicial exception into a practical application.
Claims 4 and 17 are dependent on claims 3 and 16 respectively, and include an additional element that amounts to adding insignificant extra-solution activity to the judicial exception. See MPEP §§ 2106.04(d), 2106.05(g).
Claims 5 and 6 are dependent on claim 3, incorporate the rejection of claim 3, and all elements are part of the abstract idea as shown above.
Claim 7 is dependent on claim 3, incorporates the rejection of claim 3, and all elements are part of the abstract idea as shown above.
Claim 8 is dependent on claim 1; no additional limitation is recited that integrates the judicial exception into a practical application.
Claim 9 is dependent on claim 1; no additional limitation is recited that integrates the judicial exception into a practical application.
Claims 11 and 19 are dependent on claims 10 and 18 respectively, incorporate the rejection of independent claims 10 and 18, and all elements are part of the abstract idea as shown above.
Claims 12 and 20 are dependent on claims 10 and 18 respectively, incorporate the rejection of independent claims 10 and 18, and all elements are part of the abstract idea as shown above.
Claim 13 is dependent on claim 10; no additional limitation is recited that integrates the judicial exception into a practical application.
Claim 22 is dependent on claim 21; no additional limitation is recited that integrates the judicial exception into a practical application.
Claim 23 is dependent on claim 21; no additional limitation is recited that integrates the judicial exception into a practical application, and no additional element amounts to more than what is well-understood, routine, and conventional.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 10-11 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over TAKADA et al. (US Pub. No.: 20220076161 A1), hereinafter referred to as TAKADA, in view of TAKAMATSU et al. (US Pub. No.: 20220230096 A1), hereinafter referred to as TAKAMATSU.
With respect to claim 10, TAKADA discloses:
A processor-implemented method, the method comprising: generating prediction data based on input data of a plurality of prediction models respectively by implementing the plurality of prediction models (In Fig. 3 and paragraph [0080], TAKADA discloses a prediction model generating unit configured to generate a plurality of prediction models using the plurality of training data.)
determining a plurality of uncertainty values respectively by inputting the generated prediction data and the input data to uncertainty models, wherein the plurality of uncertainty values are obtained as a respective output of a corresponding uncertainty model of the uncertainty models based on a correlation between the input data and a label associated with the input data (In paragraph [0089], TAKADA discloses the learning processing combination determining unit to evaluate the prediction accuracy. The learning processing combination determining unit inputs the sample data for evaluation to the first level prediction model to calculate predicted values, and inputs data including meta-features generated from the predicted values to the second level prediction models. The learning processing combination determining unit evaluates the prediction accuracy on the basis of errors between predicted values obtained from the second level prediction models and the value of the target variable.)
determining weights corresponding to the generated prediction data based on the determined uncertainty values (In paragraph [0089], TAKADA discloses that the learning unit 115 checks how accurate the predictions are by comparing the predicted values from the second level model 150 to the actual target values.)
With respect to claim 10, TAKADA does not explicitly disclose:
applying the determined weights to the corresponding generated prediction data
determining final prediction data based on results of the applying to improve predictive performance in the presence of label noise
However, TAKAMATSU discloses:
Applying the determined weights to the corresponding generated prediction data (In paragraph [0196], TAKAMATSU discloses that the prediction unit 23 looks at each prediction model, its weight, and the prediction data to make predictions. It calculates the chance of each renewal target withdrawing in a specific month using the data with each model. This means that for every renewal target, it predicts several withdrawal chances for each model.)
Determining final prediction data based on results of the applying to improve predictive performance in the presence of label noise (In paragraph [0197], TAKAMATSU discloses that Unit 23 predicts the average chance of withdrawal for each renewal target by using the weights for each prediction model. This gives the final withdrawal chance for each renewal target.)
TAKADA and TAKAMATSU are analogous art because both references concern prediction models for tasks assigned as objective variables. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify TAKADA, which teaches calculating an ultimate predicted value on the basis of predicted values of the plurality of prediction models, with applying the determined weights to the prediction data of each prediction model as taught by TAKAMATSU. The motivation for doing so would have been to improve the prediction accuracy of a prediction model (see [0005] of TAKAMATSU).
Regarding claim 11, TAKADA in view of TAKAMATSU discloses the elements of claim 10. In addition, TAKAMATSU discloses:
The method of claim 10, wherein the determining of the final prediction data comprises determining the final prediction data by ensembling results of the applying of the determined weights to the corresponding generated prediction data (In paragraph [0197], TAKAMATSU discloses determining the final withdrawal chance for each renewal target using the determined weights for the corresponding prediction model.)
With respect to claim 18, TAKADA discloses:
An electronic apparatus comprising: a memory configured to store a plurality of prediction models and a plurality of uncertainty models (In paragraph [0074], TAKADA discloses a storage unit configured to store a plurality of first training data including a plurality of sample data including values of a plurality of feature variables and a prediction correct value of the event. In Fig. 3 and paragraph [0080], TAKADA discloses a prediction model generating unit configured to generate a plurality of prediction models using the plurality of training data.)
A processor configured to: generate prediction data based on input data of the prediction models respectively by implementing the prediction models (In paragraph [0074], TAKADA discloses generating a prediction model for calculating an ultimate predicted value on the basis of predicted values of the plurality of prediction models.)
determine a plurality of uncertainty values respectively by inputting the generated prediction data and the input data to uncertainty models, wherein the plurality of uncertainty values are obtained as a respective output of a corresponding uncertainty model of the uncertainty models based on a correlation between the input data and corresponding labels (In paragraph [0089], TAKADA discloses the learning processing combination determining unit to evaluate the prediction accuracy. The learning processing combination determining unit inputs the sample data for evaluation to the first level prediction model to calculate predicted values, and inputs data including meta-features generated from the predicted values to the second level prediction models. The learning processing combination determining unit evaluates the prediction accuracy on the basis of errors between predicted values obtained from the second level prediction models and the value of the target variable.)
determine weights corresponding to the generated prediction data based on the determined uncertainty values (In paragraph [0089], TAKADA discloses that the learning unit 115 checks how accurate the predictions are by comparing the predicted values from the second level model 150 to the actual target values.)
With respect to claim 18, TAKADA does not explicitly disclose:
apply the determined weights to the corresponding generated prediction data
determine final prediction data based on results of the applying to improve predictive accuracy in the presence of noisy labels
However, TAKAMATSU discloses:
Apply the determined weights to the corresponding generated prediction data (In paragraph [0196], TAKAMATSU discloses that the prediction unit 23 looks at each prediction model, its weight, and the prediction data to make predictions. It calculates the chance of each renewal target withdrawing in a specific month using the data with each model. This means that for every renewal target, it predicts several withdrawal chances for each model.)
determine final prediction data based on results of the applying to improve predictive accuracy in the presence of noisy labels (In paragraph [0197], TAKAMATSU discloses that Unit 23 predicts the average chance of withdrawal for each renewal target by using the weights for each prediction model. This gives the final withdrawal chance for each renewal target.)
TAKADA and TAKAMATSU are analogous art because both references concern prediction models for tasks assigned as objective variables. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify TAKADA, which teaches calculating an ultimate predicted value on the basis of predicted values of the plurality of prediction models, with applying the determined weights to the prediction data of each prediction model as taught by TAKAMATSU. The motivation for doing so would have been to improve the prediction accuracy of a prediction model (see [0005] of TAKAMATSU).
Regarding claim 19, TAKADA in view of TAKAMATSU discloses the elements of claim 18. In addition, TAKAMATSU discloses:
The electronic apparatus of claim 18, wherein the processor is configured to determine the final prediction data by ensembling results of the applying of the determined weights to the corresponding generated prediction data (In paragraph [0197], TAKAMATSU discloses determining the final withdrawal chance for each renewal target using the determined weights for the corresponding prediction model.)
Claims 12-13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over TAKADA in view of TAKAMATSU and further in view of Wittenbach et al. (US Pub No.: 20220067737 A1), hereinafter referred to as Wittenbach.
Regarding claim 12, TAKADA in view of TAKAMATSU discloses the elements of claim 10. TAKADA in view of TAKAMATSU does not appear to explicitly disclose:
The method of claim 10, wherein the determining of the weights corresponding to the generated prediction data comprises: when a first uncertainty value determined based on first prediction data and the input data, by a first uncertainty model is less than a second uncertainty value determined, based on second prediction data and the input, by a second uncertainty model, determining a weight of the first prediction data to be higher than a weight of the second prediction data
However, Wittenbach discloses the limitation (In paragraph [0038], Wittenbach discloses that when the aleatoric uncertainty score determined from the predictive model and the input data is below a threshold, the server continues using the model. In paragraph [0039], Wittenbach discloses that the server continues, wherein the epistemic uncertainty score is determined from the second prediction data and the input data. In paragraph [0042], Wittenbach further discloses the aleatoric value being the highest value within the probabilistic distribution.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of TAKADA in view of TAKAMATSU, to include Wittenbach's generating a set of predictions associated with the applied set of parameters and calculating a first uncertainty score and a second uncertainty score associated with the generated set of predictions. The motivation for doing so would have been to improve the predictive accuracy (see [0057] of Wittenbach).
Regarding claim 13, TAKADA in view of TAKAMATSU discloses the elements of claim 10. TAKADA in view of TAKAMATSU does not appear to explicitly disclose:
The method of claim 10, wherein each of the uncertainty values comprises an aleatoric uncertainty value
However, Wittenbach discloses the limitation (In paragraph [0036], Wittenbach discloses that the probability of each individual sample may represent an estimate of aleatoric uncertainty.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of TAKADA in view of TAKAMATSU, to include Wittenbach's generating a set of predictions associated with the applied set of parameters and calculating a first uncertainty score and a second uncertainty score associated with the generated set of predictions. The motivation for doing so would have been to improve the predictive accuracy (see [0057] of Wittenbach).
Regarding claim 20, TAKADA in view of TAKAMATSU discloses the elements of claim 18. TAKADA in view of TAKAMATSU does not appear to explicitly disclose:
The electronic apparatus of claim 18, wherein: when a first uncertainty value determined based on first prediction data and the input data, by a first uncertainty model is less than a second uncertainty value determined, based on second prediction data and the input, by a second uncertainty model, the processor is configured to determine a weight of the first prediction data to be higher than a weight of the second prediction data
However, Wittenbach discloses the limitation (In paragraph [0038], Wittenbach discloses that when the aleatoric uncertainty score determined from the predictive model and the input data is below a threshold, the server continues using the model. In paragraph [0039], Wittenbach discloses that the server continues, wherein the epistemic uncertainty score is determined from the second prediction data and the input data. In paragraph [0042], Wittenbach further discloses the aleatoric value being the highest value within the probabilistic distribution.)
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of TAKADA in view of TAKAMATSU, to include Wittenbach's generating a set of predictions associated with the applied set of parameters and calculating a first uncertainty score and a second uncertainty score associated with the generated set of predictions. The motivation for doing so would have been to improve the predictive accuracy (see [0057] of Wittenbach).
Claims 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over LI et al. (US Pub. No.: 20210089883 A1), hereinafter referred to as LI, in view of LEE et al. (US Pub. No.: 20200184312 A1), hereinafter referred to as LEE.
With respect to claim 21, LI discloses:
Calculating an uncertainty loss based on the clean label data, the generated prediction data, and the estimated aleatoric uncertainty (In Fig. 3 and paragraph [0041], LI discloses that, in training a first neural network (network A), the "Co-Divide Unit" 304 assesses the loss (the difference between predicted and actual outcomes) on individual samples, dividing the data into a first labeled set (mostly clean samples) and a second unlabeled set (mostly noisy samples).)
Training the auxiliary prediction model and the auxiliary uncertainty model based on the uncertainty loss (In Fig. 3 and paragraph [0042], LI discloses the "current epoch unit" 306 further training the first neural network (network A) based on the output of the loss distribution from the co-divide unit 304.)
With respect to claim 21, LI does not appear to disclose:
A processor-implemented method, comprising: generating prediction data by inputting first data to one of an auxiliary prediction model
estimating an aleatoric uncertainty by inputting the first data and clean label data to an auxiliary uncertainty model, wherein the aleatoric uncertainty is obtained as an output of the auxiliary uncertainty model based on a correlation between the first data and the clean label data
However, LEE discloses:
A processor-implemented method, comprising: generating prediction data by inputting first data to one of an auxiliary prediction model (Based on the examiner's broadest reasonable interpretation (BRI) and the lack of details in the claim, the "auxiliary prediction model" is interpreted as a model designed to make a prediction based on the data provided. In Fig. 3 and paragraphs [0043-0044], LEE discloses the uncertainty prediction apparatus generating a predicted output based on the sampling weight of sampling model 1 (110-1).)
Estimating an aleatoric uncertainty by inputting the first data and clean label data to an auxiliary uncertainty model, wherein the aleatoric uncertainty is obtained as an output of the auxiliary uncertainty model based on a correlation between the first data and the clean label data (In paragraphs [0041-0049], LEE discloses a system for predicting uncertainty in the outputs of an ANN model, quantifying and conveying the uncertainty associated with predictions made by the ANN. In detail, in Fig. 3 and paragraph [0041], LEE discloses the ANN trained on labeled data (produced outputs). Fig. 3 and paragraph [0042] disclose additional ANN models (110-1, 110-2, 110-K) derived from the original model 10, each representing a different possible uncertainty of the prediction. Fig. 3 and paragraph [0047] disclose the generation unit 120 receiving outputs from both the main ANN 10 and the sampling models (110-1, 110-2, 110-K); it aggregates the outputs to produce a final result that incorporates the uncertainty of the prediction, and it includes an "uncertainty calculation unit" that collects the outputs from the sampling models. In paragraphs [0048-0049], LEE discloses that the outputs from the sampling models are multidimensional vectors (specifically, probabilities associated with multiple labels, such as 0-9 in the case of the MNIST dataset), and that the uncertainty calculation unit sums these probabilities across the different sampling models to form a comprehensive representation of uncertainty.)
LI and LEE are analogous art because both references concern a method of providing an uncertainty prediction apparatus including an artificial neural network model. Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify LI's assessing of the loss of the networks on individual samples with a deep learning-based artificial intelligence learning model that is trained and generated by receiving labeled training data and generating an output value in proximity to a label value, as taught by LEE. The motivation for doing so would have been to improve the prediction accuracy of a prediction model (See [0005] of LEE).
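For context only, the aggregation of sampling-model outputs characterized above for LEE can be sketched as follows. The names are hypothetical, and per-label variance across the sampling models is used here merely as one simple uncertainty proxy; this sketch is not asserted to be LEE's actual formulation.

```python
import statistics

def aggregate_sampling_outputs(sampling_outputs):
    """Aggregate per-label probability vectors from K sampling models.

    Returns the mean probability per label and, as a simple uncertainty
    proxy, the variance of each label's probability across the models.
    """
    k = len(sampling_outputs)
    n_labels = len(sampling_outputs[0])
    mean_probs = [sum(out[i] for out in sampling_outputs) / k
                  for i in range(n_labels)]
    uncertainty = [statistics.pvariance([out[i] for out in sampling_outputs])
                   for i in range(n_labels)]
    return mean_probs, uncertainty
```

Greater disagreement among the sampling models on a given label yields a larger variance, i.e., a less certain prediction for that label.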
Regarding claim 22, LI in view of LEE discloses the elements of claim 21. In addition, LI discloses:
The method of claim 21, wherein the uncertainty loss is calculated based on a difference between the clean label data and the generated prediction data (In paragraph [0041], LI discloses a loss (the difference between predicted and actual outcomes).)
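For context only, one common formulation of a loss based on the difference between clean label data and prediction data, attenuated by an estimated aleatoric variance, is a heteroscedastic-style loss. This sketch is illustrative (hypothetical names) and is not asserted to be Applicant's or LI's formulation.

```python
import math

def uncertainty_loss(y_clean: float, y_pred: float, sigma2: float) -> float:
    # Squared difference between the clean label and the prediction,
    # attenuated by the estimated aleatoric variance sigma2, plus a
    # log-variance penalty that discourages inflating sigma2 to zero
    # out the error term.
    return (y_clean - y_pred) ** 2 / (2.0 * sigma2) + 0.5 * math.log(sigma2)
```

Under this formulation, a perfect prediction with unit variance gives zero loss, and shrinking the variance while the error is nonzero increases the loss.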
Regarding claim 23, LI in view of LEE discloses the elements of claim 21. In addition, LI discloses:
The method of claim 21, wherein the clean label data is data without noise (In paragraph [0041], LI discloses a first network A and a second network B whose labeled sets are mostly clean samples.)
Allowable Subject Matter
Claim(s) 1-9 and 14-17 are allowable over the prior art of record. However, the rejection of claims 1-9 and 14-17 under 35 USC 101 still remains.
Response to Arguments
Applicant's arguments filed 12/10/2025 have been fully considered, but they were not persuasive in part.
In reference to rejection under 35 USC 101
The examiner maintains that claims 1-23 are directed toward an abstract idea. The claims are rejected under 35 USC 101 because they recite steps that compute uncertainty, correlations, losses, and weights, which are mathematical relationships rather than physical or technological relationships. The claims also involve evaluating data and labels to determine confidence and weighting, which are mental processes that can conceptually be performed by a human, even if practically done with pen and paper. The use of a processor and models merely applies the abstract calculations on a generic computer without changing how the computer itself operates. The claims focus on what information is calculated (predictions, uncertainty, loss) rather than how computer technology is improved at a hardware or system level. As a result, the claims are directed to abstract ideas. Applicant's arguments are not persuasive, and a full 101 analysis is set forth above.
In reference to rejection under 35 USC 103
Applicant’s arguments in regard to the examiner’s rejections under 35 USC 103 are moot in view of the new grounds of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EVEL HONORE whose telephone number is (703) 756-1179. The examiner can normally be reached Monday-Friday, 8 a.m.-5:30 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela D Reyes can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
EVEL HONORE
Examiner
Art Unit 2142
/HAIMEI JIANG/ Primary Examiner, Art Unit 2142