Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgement is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of CN-202310409827.7, filed on April 18, 2023, has been electronically retrieved by the USPTO.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 23 is rejected under 35 U.S.C. 101 because the claim does not fall within at least one of the four categories of patent eligible subject matter and therefore fails step 1 of the eligibility analysis set forth in MPEP 2106.
Claim 23:
Regarding claim 23, the claim recites “a computer program product, comprising a computer program which, when executed on a processor, causes the steps of the method according to claim 1 to be performed.” The claimed computer program product is not embodied on a computer-readable medium and is directed to software per se, which does not fall within any of the four categories of patent eligible subject matter. The claim is therefore directed to non-statutory subject matter and is rejected under 35 U.S.C. 101. Appropriate correction is required.
Claims 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Claim 1:
Regarding claim 1, in step 1 of the 101-analysis set forth in MPEP 2106, the claim recites
“a method for operating a predictive engine, comprising: generating two or more models with different engine structures and parameter sets; generating two or more states according to data and features; deploying the models or part of the models to the states;
selecting a top-ranked model in each state; deploying the selected models by states to a live engine; determining a probabilistic weight for each state according to live data and features;
ensembling a plurality of prediction results of the models for each state using respective probabilistic weights; and serving the ensembled prediction results as an output of the predictive engine,” and a method is one of the four statutory categories of invention.
In step 2A prong 1 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
generating two or more models …; (This is considered a mental process, since a person can mentally evaluate and generate models, see MPEP 2106.04(a)(2)(III)),
generating two or more states according to data and features, (This is considered a mental process, since a person can mentally evaluate and generate states; per paragraphs [0014]-[0017] of the specification, a state is a numeric or categorical variable, see MPEP 2106.04(a)(2)(III)),
selecting a top-ranked model in each state, (This is considered a mental process, since a person can mentally evaluate and select the top-performing model in each state, see MPEP 2106.04(a)(2)(III)),
determining a probabilistic weight for each state according to live data and features, (This is considered a mental process, since a person can mentally evaluate live data to determine a probabilistic weight, which the specification defines as a numeric probability quantity in paragraph [0045], see MPEP 2106.04(a)(2)(III)),
ensembling a plurality of prediction results of the models for each state using respective probabilistic weights, (This is considered a mental process, since a person can mentally evaluate and ensemble, i.e., group, model results using the respective probabilistic weights, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
A method for operating a predictive engine, comprising: … with different engine structures and parameter sets, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
deploying the models or part of the models to the states; (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
deploying the selected models by states to a live engine, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and serving the ensembled prediction results as an output of the predictive engine, (In step 2A, prong 2, serving results as an output recites mere data outputting, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)).
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional elements other than the serving step recite mere instructions to apply the judicial exception using generic computer components, which is not indicative of significantly more. The remaining additional element, serving the ensembled prediction results as an output, recites mere data gathering or outputting and is considered insignificant extra-solution activity. In step 2B, this insignificant extra-solution activity is well-understood, routine, and conventional activity, such as receiving or transmitting data over a network; see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP 2106.05(d)(II). See also Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering); see MPEP 2106.05(g).
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
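As a point of reference for the breadth of the recited limitations, the per-state model selection and probabilistic-weight ensembling of claim 1 can be sketched with generic code. The models, states, weights, and input below are hypothetical illustrations chosen by the examiner, not the applicant’s implementation:

```python
# Illustrative sketch only: one selected model per state, with per-state
# probabilistic weights combining the models' binary prediction results.

def ensemble(predictions, weights, threshold=0.5):
    """Weighted vote over binary (0/1) per-state prediction results."""
    score = sum(w * p for w, p in zip(weights, predictions))
    return 1 if score > threshold else 0

# Hypothetical top-ranked model for each state.
selected_models = {
    "state_a": lambda x: 1 if x > 10 else 0,
    "state_b": lambda x: 1 if x % 2 == 0 else 0,
}
# Hypothetical probabilistic weights determined from live data.
weights = {"state_a": 0.7, "state_b": 0.3}

live_input = 12
preds = [selected_models[s](live_input) for s in selected_models]
ws = [weights[s] for s in selected_models]
output = ensemble(preds, ws)  # both models predict 1, so the ensemble serves 1
```

Each step (selecting a model, weighting, summing, thresholding) uses only generic computation, consistent with the analysis above.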
Claim 2:
Regarding claim 2, it is dependent upon claim 1 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 2 recites the following abstract idea:
The method according to claim 1, wherein the method further comprises evaluating a performance of the output, the model selection, (this is considered a mental process, since a person can mentally evaluate, perform model selection, and generate a performance output, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Further, claim 2 also recites an additional element:
… and the probabilistic weights by computing one or more evaluation results based on at least one evaluation metric, (In step 2A, prong 2, computing results is considered mere instructions to apply an exception using a generic computer, the Evaluator 450 described in paragraph [0039] of the specification – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 3:
Regarding claim 3, it is dependent upon claim 2 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 2. Further, claim 3 recites the following additional element:
The method according to claim 2, wherein the method further comprises updating the probabilistic weights of the states through rewards or penalties according to the performance, thereby tuning the predictive engine, (In step 2A, prong 2, tuning the predictive engine and updating the probabilistic weights according to the performance is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 4:
Regarding claim 4, it is dependent upon claim 3 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 3. Further, claim 4 recites the following abstract idea:
The method according to claim 3, wherein the models are generated from data, features and/or data derived from data, (this is considered a mental process, since a person can mentally evaluate and generate models from data, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 5:
Regarding claim 5, it is dependent upon claim 3 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 3. Further, claim 5 recites the following abstract idea:
The method according to claim 3, wherein the top ranked models are selected by ranking one or more performance metrics and/or correlating with other models, (this is considered a mental process, since a person can mentally evaluate and rank models with one or more performance metrics, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 6:
Regarding claim 6, it is dependent upon claim 3 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 3. Further, claim 6 recites the following abstract idea:
The method according to claim 3, wherein the probabilistic weights are determined using probabilities of the current states according to the latest data, (This is considered a mathematical concept (a mathematical relationship, mathematical formula or equation, or mathematical calculation); in paragraph [0045] of the specification, the probabilistic weights are decimal values used in a calculation: “for example, in an equal-weight scenario with 3 prediction results of 1, 1, 0, the ‘ensembling’ yields (1+1+0)/3 = 0.66 > 0.5, which will then predict the outcome as 1. Of course, there may be scenarios where the weights are not equal. For example, if the probabilities of the three predicted results 1, 1, and 0 are 0.7, 0.2, and 0.1 respectively, the ‘ensembling’ will yield (1x0.7 + 1x0.2 + 0x0.1) = 0.9 > 0.5, which will also predict the outcome as 1”, see MPEP 2106.04(a)(2), subsection I),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mathematical concept but for the recitation of generic computer components, then they fall within the mathematical concept grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
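The arithmetic in the passage quoted from paragraph [0045] can be checked directly; the following sketch simply reproduces the equal-weight and unequal-weight examples using the values quoted above:

```python
# Reproducing the two ensembling examples from paragraph [0045].
preds = [1, 1, 0]  # the three binary prediction results

# Equal weights: a simple average of the three prediction results.
equal = sum(preds) / 3  # (1 + 1 + 0) / 3, about 0.67 > 0.5, so outcome 1

# Unequal weights: weighted sum with probabilities 0.7, 0.2, 0.1.
weights = [0.7, 0.2, 0.1]
weighted = sum(w * p for w, p in zip(weights, preds))  # 0.9 > 0.5, outcome 1

outcome_equal = 1 if equal > 0.5 else 0
outcome_weighted = 1 if weighted > 0.5 else 0
```

Both branches reduce to a weighted sum compared against a 0.5 threshold, which is the mathematical calculation identified above.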
Claim 7:
Regarding claim 7, it is dependent upon claim 1 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 7 recites the following additional element:
The method according to claim 1, wherein the state comprises a status of a plant machine, including the number of years and months that machine components have been in operation, an outdoor temperature, and/or an outdoor humidity, and wherein the predictive engine is used, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 7 recites the following abstract idea:
to predict productivity of the plant or a probability of the machine requiring maintenance, (This is considered a mental process, since a person can mentally evaluate and predict productivity of the plant or a probability of the machine requiring maintenance, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 8:
Regarding claim 8, it is dependent upon claim 1 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 8 recites the following additional element:
The method according to claim 1, wherein the state comprises a status of a computer, including applications already open on the computer, time of day, and/or working hours, and wherein the predictive engine is used, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 8 recites the following abstract idea:
to predict the purpose or task of a user using the computer, (This is considered a mental process, since a person can mentally evaluate and predict the purpose or task of a user using the computer, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 9:
Regarding claim 9, it is dependent upon claim 1 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 9 recites the following additional element:
The method according to claim 1, wherein the state comprises a state of traffic, including traffic conditions on each route, the date, and/or whether it is a holiday, and wherein the predictive engine is used, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 9 recites the following abstract idea:
to predict a probability of traffic congestion, (This is considered a mental process, since a person can mentally evaluate and predict a probability of traffic congestion, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 10:
Regarding claim 10, it is dependent upon claim 1 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 10 recites the following additional element:
The method according to claim 1, wherein the state comprises a spending appetite of consumers, including a type of spending and/or a level of spending, and wherein the predictive engine, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 10 recites the following abstract idea:
is used to predict a probability of the consumers shopping online, (This is considered a mental process, since a person can mentally evaluate and predict a probability of the consumers shopping online, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 11:
Regarding claim 11, it is dependent upon claim 1 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 11 recites the following additional element:
The method according to claim 1, wherein the state comprises market or financial conditions, and wherein the predictive engine is used, (In step 2A, prong 2, using a predictive engine is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 11 recites the following abstract idea:
to predict asset prices or risks, or is used to predict a risk of lending to a company or to predict a stock price of the company, (this is considered a mental process, since a person can mentally evaluate and predict prices or risks, i.e., quantitative amounts or numbers, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 12:
Regarding claim 12, in step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “a system for operating a predictive engine, comprising: a processor;
a computer-readable working memory; a predictive engine stored in the working memory; and a non-volatile computer-readable storage medium for storing program codes, the stored codes being capable, when executed by the processor, of causing the following steps to be performed: generating two or more models with different engine structures and parameter sets; generating two or more states according to data and features; deploying the models or part of the models to the states; selecting a top-ranked model in each state; deploying the selected models by states to a live engine; determining a probabilistic weight for each state according to live data and features; ensembling a plurality of prediction results of the models for each state using respective probabilistic weights; and serving the ensembled prediction results as an output of the predictive engine”, and a system is one of the four statutory categories of invention.
In step 2A prong 1 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for the recitation of generic computer components:
generating two or more models …; (This is considered a mental process, since a person can mentally evaluate and generate models, see MPEP 2106.04(a)(2)(III)),
generating two or more states according to data and features, (This is considered a mental process, since a person can mentally evaluate and generate states; per paragraphs [0014]-[0017] of the specification, a state is a numeric or categorical variable, see MPEP 2106.04(a)(2)(III)),
selecting a top-ranked model in each state, (This is considered a mental process, since a person can mentally evaluate and select the top-performing model in each state, see MPEP 2106.04(a)(2)(III)),
determining a probabilistic weight for each state according to live data and features, (This is considered a mental process, since a person can mentally evaluate live data to determine a probabilistic weight, which the specification defines as a numeric probability quantity in paragraph [0045], see MPEP 2106.04(a)(2)(III)),
ensembling a plurality of prediction results of the models for each state using respective probabilistic weights, (This is considered a mental process, since a person can mentally evaluate and ensemble, i.e., group, model results using the respective probabilistic weights, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In step 2A prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
a system for operating a predictive engine, comprising a processor; (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
a computer-readable working memory; (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
a predictive engine stored in the working memory; (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and a non-volatile computer-readable storage medium for storing program codes, the stored codes being capable, when executed by the processor, of causing the following steps to be performed, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
… with different engine structures and parameter sets, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
deploying the models or part of the models to the states; (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
deploying the selected models by states to a live engine, (This is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
and serving the ensembled prediction results as an output of the predictive engine, (In step 2A, prong 2, serving results as an output recites mere data outputting, which is considered insignificant extra-solution activity – see MPEP 2106.05(g)).
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In step 2B of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional elements other than the serving step recite mere instructions to apply the judicial exception using generic computer components, which is not indicative of significantly more. The remaining additional element, serving the ensembled prediction results as an output, recites mere data gathering or outputting and is considered insignificant extra-solution activity. In step 2B, this insignificant extra-solution activity is well-understood, routine, and conventional activity, such as receiving or transmitting data over a network; see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP 2106.05(d)(II). See also Mayo, 566 U.S. at 79, 101 USPQ2d at 1968; OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1092-93 (Fed. Cir. 2015) (presenting offers and gathering statistics amounted to mere data gathering); see MPEP 2106.05(g).
Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Claim 13:
Regarding claim 13, it is dependent upon claim 12 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 12. Further, claim 13 recites the following abstract idea:
The system according to claim 12, wherein the steps further comprise: evaluating a performance of the output, the model selection, (this is considered a mental process, since a person can mentally evaluate, perform model selection, and generate a performance output, see MPEP 2106.04(a)(2)(III)),
Further, claim 13 recites the following additional element:
and the probabilistic weights by computing one or more evaluation results based on at least one evaluation metric, (In step 2A, prong 2, computing results is considered mere instructions to apply an exception using a generic computer, the Evaluator 450 described in paragraph [0039] of the specification – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 14:
Regarding claim 14, it is dependent upon claim 13 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 13. Further, claim 14 recites the following additional element:
The system according to claim 13, wherein the steps further comprise: updating the probabilistic weights of the states through rewards or penalties according to the performance, thereby tuning the predictive engine, (In step 2A, prong 2, tuning the predictive engine and updating the probabilistic weights according to the performance is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 15:
Regarding claim 15, it is dependent upon claim 14 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 14. Further, claim 15 recites the following abstract idea:
The system according to claim 14, wherein the models are generated from data, features and/or data derived from data, (this is considered a mental process, since a person can mentally evaluate and generate models from data, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 16:
Regarding claim 16, it is dependent upon claim 14 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 14. Further, claim 16 recites the following abstract idea:
The system according to claim 14, wherein the top ranked models are selected by ranking one or more performance metrics and/or correlating with other models, (this is considered a mental process, since a person can mentally evaluate and rank models with one or more performance metrics, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 17:
Regarding claim 17, it is dependent upon claim 14, and thereby incorporates the limitations of, and corresponding analysis applied to claim 14. Further, claim 17 recites the following abstract idea:
The system according to claim 14, wherein the probabilistic weights are determined using probabilities of the current states according to the latest data, (This is considered a mathematical relationship, mathematical formula or equation, mathematical calculations, in paragraph [0045] of the specification, probabilistic weights are viewed as decimals part of a calculation, “for example, in an equal-weight scenario with 3 prediction results of 1, 1, 0, the ‘ensembling’ yields (1+1+0)/3 = 0.66 > 0.5, which will then predict the outcome as 1. Of course, there may be scenarios where the weights are not equal. For example, if the probabilities of the three predicted results 1, 1, and 0 are 0.7, 0.2, and 0.1 respectively, the ‘ensembling’ will yield (1x0.7 + 1x0.2 + 0x0.1) = 0.9 > 0.5, which will also predict the outcome as 1”, see MPEP 2106.04(a)(2), subsection I),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mathematical concept but for the recitation of generic computer components, then they fall within the mathematical concept grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
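For illustration only (not part of the claims or the prosecution record), the weighted “ensembling” arithmetic quoted above from paragraph [0045] of the specification can be sketched as follows; the function name and the 0.5 threshold variable are taken directly from the quoted example, while everything else is hypothetical:

```python
# Sketch of the weighted-vote "ensembling" arithmetic from paragraph [0045]:
# each model's binary prediction is multiplied by its probabilistic weight,
# the weighted sum is compared against 0.5, and the outcome is 1 if it exceeds it.

def ensemble(predictions, weights, threshold=0.5):
    """Weighted-vote ensembling of binary predictions; returns (outcome, score)."""
    score = sum(p * w for p, w in zip(predictions, weights))
    return (1 if score > threshold else 0), score

# Equal-weight scenario from the quote: (1 + 1 + 0) / 3 ≈ 0.67 > 0.5, outcome 1.
outcome, score = ensemble([1, 1, 0], [1/3, 1/3, 1/3])

# Unequal weights 0.7, 0.2, 0.1: (1*0.7 + 1*0.2 + 0*0.1) = 0.9 > 0.5, outcome 1.
outcome2, score2 = ensemble([1, 1, 0], [0.7, 0.2, 0.1])
```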
Claim 18:
Regarding claim 18, it is dependent upon claim 12 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 12. Further, claim 18 recites the following additional element:
The system according to claim 12, wherein the state comprises a status of a plant machine, including the number of years and months that machine components have been in operation, an outdoor temperature, and/or an outdoor humidity, and wherein the predictive engine is used, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 18 recites the following abstract idea:
to predict productivity of the plant or a probability of the machine requiring maintenance, (This is considered a mental process, since a person can mentally evaluate and predict productivity of the plant or a probability of the machine requiring maintenance, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 19:
Regarding claim 19, it is dependent upon claim 12 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 12. Further, claim 19 recites the following additional element:
The system according to claim 12, wherein the state comprises a status of a computer, including applications already open on the computer, time of day, and/or working hours, and wherein the predictive engine is used, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 19 recites the following abstract idea:
to predict the purpose or task of a user using the computer, (This is considered a mental process, since a person can mentally evaluate and predict the purpose or task of a user using the computer, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 20:
Regarding claim 20, it is dependent upon claim 12 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 12. Further, claim 20 recites the following additional element:
The system according to claim 12, wherein the state comprises a state of traffic, including traffic conditions on each route, the date, and/or whether it is a holiday, and wherein the predictive engine is used, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 20 recites the following abstract idea:
to predict a probability of traffic congestion, (This is considered a mental process, since a person can mentally evaluate and predict a probability of traffic congestion, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 21:
Regarding claim 21, it is dependent upon claim 12 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 12. Further, claim 21 recites the following additional element:
The system according to claim 12, wherein the state comprises a spending appetite of consumers, including a type of spending and/or a level of spending, and wherein the predictive engine is used, (In step 2A, prong 2, this is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 21 recites the following abstract idea:
to predict a probability of the consumers shopping online, (This is considered a mental process, since a person can mentally evaluate and predict a probability of the consumers shopping online, see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 22:
Regarding claim 22, it is dependent upon claim 12 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 12. Further, claim 22 recites the following additional element:
The system according to claim 12, wherein the state comprises market or financial conditions, and wherein the predictive engine is used, (In step 2A, prong 2, using a predictive engine is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Further, claim 22 recites the following abstract idea:
to predict asset prices or risks, or is used to predict a risk of lending to a company or to predict a stock price of the company, (This is considered a mental process, since a person can mentally evaluate and predict prices or risks (seen as quantitative amounts or numbers), see MPEP 2106.04(a)(2)(III)),
If the claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim 23:
Regarding claim 23, it is dependent upon claim 1 and thereby incorporates the limitations of, and the corresponding analysis applied to, claim 1. Further, claim 23 also recites an additional element:
a computer program product, comprising a computer program which, when executed on a processor, causes the steps of the method according to claim 1 to be performed, (In step 2A, prong 2, having a computer program and a processor is considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)), (In step 2B, this is also considered mere instructions to apply an exception using a generic computer – see MPEP 2106.05(f)),
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 12, 13, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over ACHIN J. et al. (Pub. No. JP 2019/537125-A), published on December 19, 2019, (hereafter, ACHIN) in view of TIAN L. et al. (US PG Pub No. US 2020/0012948-A1), published January 9, 2020, (hereafter, TIAN).
Claim 1:
Regarding claim 1, ACHIN teaches “a method for operating a predictive engine, comprising: generating two or more models with different engine structures and parameter sets;”
See ACHIN in paragraph [0418], describing "for the regression models tested, 39% of the second order models were not as accurate as the corresponding first order models, but only 10% were worse according to the residual mean square error measure of accuracy. Forty-seven percent of the secondary model was in fact more accurate than the primary model. In only 14% of cases, the secondary model above 10% was not as accurate as the primary model. About 10% of these cases (approximately 1.5% of the total population) occurred when the dataset was very small. In 35% of all cases, the best secondary model was derived from a mixture of primary models." Further, see ACHIN in paragraph [0008], describing “Statistical learning techniques are based on many academic traditions (eg, mathematics, statistics, physics, engineering, economics, sociology, biology, medicine, artificial intelligence, data mining, etc.) and in many commercial disciplines. Affected by application (eg, finance, insurance, retail, manufacturing, medical, etc.). As a result, there are many different predictive modeling algorithms, which may have many variants and / or adjustment parameters, and different pre-processing and post-processing steps using their own variants and / or parameters. The volume of potential predictive modeling solutions (eg, a combination of pre-processing steps, modeling algorithms, and post-processing steps) is already enormous, and is growing rapidly as researchers develop new techniques.” Here, ACHIN teaches generating two or more models with different engine structures, with each model including parameters per field of study (i.e., parameter sets).
Further, ACHIN teaches “generating two or more states according to data and features;”
See paragraphs [0463-0464], where ACHIN describes "when working with the category prediction problem, there may be minority classes and majority classes. The minority class can be much smaller but relatively more important, as in the case of fraud detection. In some embodiments, engine 110 "downsamples" the majority class so that the number of training observations for the majority class is more similar to that for the minor class. In some cases, the modeling techniques may automatically adapt to such weights directly during model fitting. If the modeling technique does not accommodate such weights, engine 110 may make post-fit adjustments that are proportional to the amount of downsampling. This approach may sacrifice some accuracy due to much shorter execution time and lower resource consumption. Some modeling techniques may perform more efficiently than others. For example, some modeling techniques may be optimized to run on parallel computing clusters or on servers with specialty processors. The metadata for each modeling technique may indicate any such performance benefits. When engine 110 is assigning computing jobs, it may detect jobs for modeling techniques, the benefits of which are applied within currently available computing environments. Then, during each search, engine 110 may use a larger data set for these jobs." Here, ACHIN's two classes are analogous to two or more states according to data, and ACHIN thus teaches generating two or more states according to data and features. See ACHIN in paragraph [0020] for more information.
Further, ACHIN teaches “deploying the models or part of the models to the states;”
See ACHIN in paragraph [0038]: "In some embodiments, the action of the method further comprises deploying a fitted model. In some embodiments, the time series data is the first time series data, and the step of developing the fitted model comprises applying the fitted model to the second time series data representing one or more instances of the prediction problem." Further, ACHIN in paragraph [0039] describes "In some embodiments, the fit model is deployed on one or more servers, and other fit models are also deployed on one or more servers, and the predictions to the fit model and other fit models are made." Further, see ACHIN in paragraph [0443], mentioning “an example will now be described. For online games, a game provider may support many different types of games, with many instances of each type of game and many users playing at each instance. In order to increase (eg, optimize) user satisfaction and revenue from games, such providers may desire to predict user behavior based on the performance of the games played by the user. Such a provider may use such predictions, provide suggestions to the player, or adjust its future gaming experience.” Here, the examiner construes the limitation to mean incorporating state information when running the model(s), where a state is construed to be any value or quantity (categorical or numeric) of a data variable. Here, ACHIN describes deploying a model or part of the models to the instance, where an instance relates to a state (i.e., deploying the models or part of the models to the states). The example ACHIN illustrates in paragraph [0443] shows the instance being a state comprising the number of users and the type of game in a gaming setting.
Further, ACHIN teaches “selecting a top-ranked model in each state;”
See ACHIN in paragraph [0208], describing “the modeling system 100 selects specific candidate models and blending techniques, or uses some or all of the candidate models to generate some of the blending techniques in the modeling technique library. Or give the user the option to adapt everything automatically.” Here, ACHIN describes the generation and selection of candidate models. See ACHIN in paragraph [0038]: “in some embodiments, the action of the method further comprises deploying a fitted model. In some embodiments, the time series data is the first time series data, and the step of developing the fitted model comprises applying the fitted model to the second time series data representing one or more instances of the prediction problem.” ACHIN describes that instances relate to a model in each state. See paragraph [0039] in ACHIN for more details.
Later, see ACHIN in paragraphs [0255-0256], describing "as part of the model building process, predictive modeling system 100 may use cross-validation to select the best values of these tuning parameters, thereby improving tuning parameter selection and parameter, Create an audit trail of how choices affect results. The predictive modeling system 100 may adapt and evaluate different model structures, which are considered part of the present automated process, and rank the results with respect to verification set performance.
5. Select the final model. The selection of the final model can be made by the predictive modeling system 100 or by the user. In the latter case, the predictive modeling system may allow the user to, for example, evaluate the model's ranked verification set performance, compare the performance and rank by quality measures other than those used in the fitting process, and Support may be provided to assist in making this determination, including the opportunity to build an ensemble model from these component models that exhibit the best individual performance". Here, in paragraphs [0255-0256], ACHIN describes selecting the best model by individual performance, ranking the results with respect to verification set performance as well as by quality measures other than those used in the fitting process, applied to the instances mentioned in paragraph [0038] (i.e., a top-ranked model in each state).
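For illustration only (not from the cited record), the selection of a top-ranked model per state by ranking validation-set performance, as discussed above, might be sketched as follows; all model names, state names, and scores are hypothetical:

```python
# Sketch of selecting the top-ranked model in each state: given
# (state, model, validation_score) results, keep the best-scoring
# model per state, mirroring ranking by verification-set performance.

def select_top_models(results):
    """results: iterable of (state, model_name, validation_score) tuples.
    Returns {state: model_name} with the highest score per state."""
    best = {}
    for state, model, score in results:
        if state not in best or score > best[state][1]:
            best[state] = (model, score)
    return {state: model for state, (model, _) in best.items()}

# Hypothetical validation results for two states and two candidate models.
results = [
    ("state_A", "model_1", 0.81),
    ("state_A", "model_2", 0.87),
    ("state_B", "model_1", 0.74),
    ("state_B", "model_2", 0.69),
]
selected = select_top_models(results)
```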
Further, ACHIN teaches “deploying the selected models by states to a live engine;”
See paragraph [0113], where ACHIN discusses “data indicative of the results of applying the predictive modeling technique to the prediction problem or data set may be provided by a search engine (e.g., based on the results of previous trials using the predictive modeling technique for the prediction problem or data set). Provided by 110, provided by a user (e.g., based on the user's expertise), and / or obtained from any other suitable source. In some embodiments, the search engine 110 is based, at least in part, on the relationship between the actual performance of the instance of the prediction problem and the performance predicted by the prediction model generated via predictive modeling techniques. And update such data.”
Further, see paragraph [0231] where ACHIN mentions “for each model, search engine 110 may store a record of the modeling techniques used to generate the model, and the state of the model after fitting, including coefficients and hyperparameter values. Since each technique is already machine-executable, these values may be sufficient for the execution engine to generate predictions for new observations. In some embodiments, model predictions may be generated by applying preprocessing and modeling steps described in modeling techniques to each instance of new input data.”
Further, see paragraph [0103], where ACHIN mentions “accordingly, the user interface may be used by an analyst to enhance its own productivity and / or to improve the performance of the search engine 110. … the user interface 120 presents the results of the search in real time and allows the user to adjust the scope of the search in real time (e.g., to adjust the allocation of resources during evaluation of different modeling solutions). .., the user interface 120 provides tools for coordinating the efforts of multiple data analysts working on the same prediction problem and / or related prediction problems.” From paragraphs [0103], [0113], and [0231], ACHIN teaches that the search engine 110 is the live engine, working with the user interface 120 to run the models for each instance of the data in real time (i.e., deploying the selected models by states to a live engine). See ACHIN in paragraphs [0306] and [0415] for more details.
Further, ACHIN teaches “determining a probabilistic weight for each state according to live data and features;”
See ACHIN in paragraph [0467], describing "Similarly, certain observations may represent particularly important events for which the user wishes to assign additional weights. Thus, an additional variable inserted into the dataset may indicate the relative weight of each observation. Engine 110 may then use this weight when training the models and calculating their accuracy, and the goal is to produce more accurate predictions under higher weight conditions.” ACHIN's statement that the goal is to produce more accurate predictions under higher weight conditions shows that the weight probabilities assigned to each state vary according to the condition or state, which relates to determining a probabilistic weight for each state according to features.
Also, see ACHIN in paragraph [0415], mentioning “.. modeling techniques may use alternative or auxiliary training and / or test data. Such alternatives may include other real-world data from either the same or different data sources (eg, via interpolation and extrapolation) (eg, a wider range of possibilities than exist in real-world samples). It may include real-world data combined with machine-generated data (for the purpose of covering gender) or data completely generated by machine-based probabilistic models. In some embodiments, the value of the target variable used to train the secondary model is a predicted value from the primary model.” ACHIN notes in paragraph [0415] that the machine-based probabilistic models correspond to the selected models by states and are incorporated with the real-time information, which relates to determining a probabilistic weight for each state according to live data and features. See ACHIN, paragraphs [0327] and [0039], for more information.
Further, ACHIN teaches “ensembling a plurality of prediction results of the models for each state using respective probabilistic weights;”
See ACHIN in paragraph [0049], describing "determining the model-independent prediction of the feature comprises calculating a statistical measurement of the diffusion of the model-specific prediction, wherein the statistical measurement of the diffusion is model-specific. Is selected from the group consisting of the range, variance, and standard deviation of the predicted values of In some embodiments, determining the model-independent prediction of the feature includes calculating a combination of model-specific predictions of the feature. In some embodiments, calculating the combination of model-specific predictions includes calculating a weighted combination of model-specific predictions. In some embodiments, calculating a weighted combination of model-specific predictions includes assigning individual weights to model-specific predictions, and wherein a particular model-specific prediction corresponding to a particular fit prediction model is provided. The weight assigned to the value increases as the first accuracy score of the fit prediction model increases." Here, ACHIN mentions that calculating the combination of model-specific predictions includes calculating a weighted combination of model-specific predictions, which relates to ensembling the results of the models for each feature or state using probabilistic weights.
Further, ACHIN describes in paragraph [0255] “select a model structure, generate derived features, select model tuning parameters, fit and evaluate the model. In some embodiments, the predictive modeling system 100 may include a number of different, including, but not limited to, decision trees, neural networks, support vector machine models, regression models, boost trees, random forests, deep learning neural networks, and the like. The model type can be adapted. The predictive modeling system 100 may provide an option to automatically build an ensemble from these component models..” Here, ACHIN explicitly mentions building an ensemble. See paragraphs [0057-0058] in ACHIN for more information.
However, ACHIN fails to teach “and serving the ensembled prediction results as an output of the predictive engine.”
In an analogous art, TIAN teaches “and serving the ensembled prediction results as an output of the predictive engine.”
See TIAN in paragraph [0068], describing "the ensemble scoring engine 120 stores all calculated results in the subset score record for the first subset S1 in block 240, as shown in the first subset score record 441." TIAN elaborates in paragraphs [0032, 0034]: "the ensemble scoring engine 120 records a score calculated for the real time score request 109 in the score log 150, as being associated with a current subset of base model instances from the base model list 140 and the priority policy 135 as applied. In certain embodiments of the present invention, a separate process handling the score log 150 can be present. The score log 150 includes all intermediate scoring components and corresponding values during the process of the ensemble scoring engine 120, until producing the predicted ensemble score 195… The ensemble scoring engine 120 selects a subset of prioritized base model instances from the base mode list 140 that are enough to produce a passing score that meets the end condition 131, resulting in spending less idle time pending completion of other base models in an ensemble with longer response times." Here, TIAN teaches that the calculated results (i.e., the ensembled prediction results as an output) in the subset score record are stored in the ensemble scoring engine 120 (i.e., the predictive engine), which records (and also returns) a calculated score in real time associated with a current subset of base model instances from the base model list. TIAN overall teaches serving the ensembled prediction results as an output of the predictive engine. Further, see TIAN in paragraph [0020] and figure 1 for more information. Note that the word "serving" is construed according to the definition in paragraph [0036] of the specification, where the "Serving" component "refers to a component of a predictive engine for returning prediction results, and for adding custom business logic. If an engine has multiple algorithms, the Serving component may combine multiple prediction results into one."
[media_image1.png: 626 x 419 pixels, greyscale]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the reference ACHIN with the teachings of TIAN, since both teach using a predictive engine that generates machine learning models with prediction results using probabilistic weights.
One of ordinary skill in the art would be motivated to do so because, by integrating TIAN’s framework into the methods of ACHIN, one would achieve the goal “to improve processing efficiency in forming a subset for ensemble score prediction. By ordering the base models in the list, embodiments of the present invention can further improve efficiency in prediction by eliminating delays caused by selecting base models for a subset that cannot generate a passing score for accuracy. Certain embodiments of the present invention also employs a pass factor respective to each base model, in order to minimize the delays caused by running more than one subset of the base models.” (TIAN, paragraph [0093]).
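For illustration only (hypothetical names; this is not TIAN's actual implementation), the prioritized-subset scoring with an end condition quoted above might be sketched as follows: base models are scored in priority order, and scoring stops as soon as the running ensemble score satisfies the end condition, avoiding idle time waiting on slower base models:

```python
# Sketch of scoring with a prioritized list of base models and an end
# condition: accumulate a weighted running score and stop early once the
# end condition accepts it, skipping the remaining (slower) base models.

def ensemble_score(prioritized_models, request, end_condition):
    """prioritized_models: list of (model_fn, weight) in priority order.
    Returns the first running weighted score accepted by end_condition,
    or the full weighted score if no prefix passes."""
    total, weight_sum = 0.0, 0.0
    for model_fn, weight in prioritized_models:
        total += weight * model_fn(request)
        weight_sum += weight
        score = total / weight_sum
        if end_condition(score):
            return score  # passing score reached; remaining models skipped
    return total / weight_sum

# Hypothetical base models as simple functions of the request.
prioritized = [(lambda r: 0.9, 0.5), (lambda r: 0.8, 0.3), (lambda r: 0.1, 0.2)]
score = ensemble_score(prioritized, request={}, end_condition=lambda s: s >= 0.85)
```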
Claim 2:
Regarding claim 2, ACHIN in view of TIAN teaches the limitations in claim 1.
Further, ACHIN teaches “the method according to claim 1, wherein the method further comprises: evaluating a performance of the output, the model selection, and the probabilistic weights by computing one or more evaluation results based on at least one evaluation metric,”
See ACHIN in paragraph [0049], describing "in some embodiments, the action of the method further comprises determining a model-independent prediction of the feature based on the model-specific prediction of the feature. In some embodiments, determining the model-independent prediction of the feature comprises calculating a statistical measure of the center and / or diffusion of the model-specific prediction of the feature.... In some embodiments, calculating the combination of model-specific predictions includes calculating a weighted combination of model-specific predictions. In some embodiments, calculating a weighted combination of model-specific predictions includes assigning individual weights to model-specific predictions, and wherein a particular model-specific prediction corresponding to a particular fit prediction model is provided. The weight assigned to the value increases as the first accuracy score of the fit prediction model increases." Note the examiner construes probabilistic weights to mean probabilities, likelihoods, or statistical information that a given variable or feature occurs. Here, ACHIN describes in paragraph [0049] a statistical measure of the model-specific prediction of the feature (i.e., a probabilistic weight), and the statement that the "weight assigned to the value increases as the first accuracy score of the fit prediction model increases" shows associating probabilistic weights with computed predictions (i.e., evaluation results) based on at least one evaluation metric (i.e., accuracy).
See ACHIN in paragraph [0146], describing how the "search engine 110 may use the model performance metadata to determine the performance value (expected or actual) of the similar modeling procedure for the similar prediction problem. These performance values can then be combined to generate an estimate of the suitability of the modeling procedure in question for the prediction problem in question. For example, the search engine 110 may calculate the suitability of the modeling procedure in question as a weighted sum of the performance values of the similar modeling procedure for a similar prediction problem." Here, ACHIN shows evaluating a performance of the output.
Further, see paragraph [0165], where ACHIN describes "selecting a prediction model for the prediction problem comprises iteratively selecting a subset of the prediction model and combining the prediction model selected on a larger or different portion of the dataset. Training. This iterative process may continue until a prediction model is selected for the prediction problem." Here, ACHIN teaches model selection. See paragraphs [0415] and [0463] for more information. Overall, ACHIN teaches the method according to claim 1, wherein the method further comprises evaluating a performance of the output, the model selection, and the probabilistic weights by computing one or more evaluation results based on at least one evaluation metric.
Claim 12:
Regarding claim 12, ACHIN further teaches “a system for operating a predictive engine, comprising: a processor;”
See paragraph [0042] where ACHIN describes “another embodiment of this aspect is a memory configured to store a machine-executable module that encodes a predictive modeling procedure, the predictive modeling procedure comprising at least one pre-processing task; A memory including a plurality of tasks, including one model fitting task, and at least one processor configured to execute the machine-executable module, the step of executing the machine-executable module comprising: And a processor for performing a predictive modeling procedure.” Here, ACHIN teaches a processor.
Further, see paragraph [0016] where ACHIN mentions “other embodiments of this aspect each include a corresponding computer system, apparatus, and computer program recorded on one or more computer storage devices configured to perform the actions of the method. A system of one or more computers is identified by having software, firmware, hardware, or a combination thereof installed on the system that, when operated, causes an action or causes the system to perform an action.” See paragraphs [0060], [0084], and [0479, 0481] for more information.
Further, ACHIN teaches “a computer-readable working memory;”
Further, see paragraph [0477] where ACHIN describes “in this aspect, some embodiments, when run on one or more computers or other processors, perform one or more methods that implement the various embodiments discussed above. Computer readable media (or more than one computer readable media) (e.g., computer memory, one or more floppy disks, compact disks, optical disks, magnetic tape, flash memory) encoded with more than one program , A circuit configuration in a field programmable gate array or other semiconductor device, or other tangible computer storage medium)… The one or more computer-readable media may have one or more different programs such that one or more programs stored thereon implement the various aspects of predictive modeling..” Here, ACHIN mentions a computer memory that is part of computer-readable media.
Further, ACHIN teaches “a predictive engine stored in the working memory,”
See paragraph [0042] where ACHIN describes “another embodiment of this aspect is a memory configured to store a machine-executable module that encodes a predictive modeling procedure, the predictive modeling procedure comprising at least one pre-processing task; A memory including a plurality of tasks, including one model fitting task, and at least one processor configured to execute the machine-executable module, the step of executing the machine-executable module comprising: And a processor for performing a predictive modeling procedure.” Note the examiner construes working memory to mean memory that is part of a computing system. Here, ACHIN describes “a memory configured to store a machine-executable module that encodes a predictive modeling procedure,” where the machine-executable module relates to a predictive engine, and this is stored in memory (i.e. working memory).
Further, ACHIN teaches “a non-volatile computer-readable storage medium for storing program codes, the stored codes being capable, when executed by the processor, of causing the following steps to be performed,”
See paragraph [0280], where ACHIN describes “standard components associated with the client computer, including the central processing unit, volatile and non-volatile storage, input / output devices, and displays, are not shown.” Here, ACHIN teaches a non-volatile computer-readable storage medium for storing program codes.
Further, ACHIN teaches “generating two or more models with different engine structures and parameter sets;”
ACHIN in paragraph [0418] describes "for the regression models tested, 39% of the second order models were not as accurate as the corresponding first order models, but only 10% were worse according to the residual mean square error measure of accuracy. Forty-seven percent of the secondary model was in fact more accurate than the primary model. In only 14% of cases, the secondary model above 10% was not as accurate as the primary model. About 10% of these cases (approximately 1.5% of the total population) occurred when the dataset was very small. In 35% of all cases, the best secondary model was derived from a mixture of primary models." Further, ACHIN in paragraph [0008] describes “Statistical learning techniques are based on many academic traditions (eg, mathematics, statistics, physics, engineering, economics, sociology, biology, medicine, artificial intelligence, data mining, etc.) and in many commercial disciplines. Affected by application (eg, finance, insurance, retail, manufacturing, medical, etc.). As a result, there are many different predictive modeling algorithms, which may have many variants and / or adjustment parameters, and different pre-processing and post-processing steps using their own variants and / or parameters. The volume of potential predictive modeling solutions (eg, a combination of pre-processing steps, modeling algorithms, and post-processing steps) is already enormous, and is growing rapidly as researchers develop new techniques.” Here, ACHIN teaches generating two or more models with different engine structures, with each model including parameters per field of study (i.e. parameter sets).
Further, ACHIN teaches “generating two or more states according to data and features;”
See paragraphs [0463-0464], where ACHIN describes "when working with the category prediction problem, there may be minority classes and majority classes. The minority class can be much smaller but relatively more important, as in the case of fraud detection. In some embodiments, engine 110 "downsamples" the majority class so that the number of training observations for the majority class is more similar to that for the minor class. In some cases, the modeling techniques may automatically adapt to such weights directly during model fitting. If the modeling technique does not accommodate such weights, engine 110 may make post-fit adjustments that are proportional to the amount of downsampling. This approach may sacrifice some accuracy due to much shorter execution time and lower resource consumption. Some modeling techniques may perform more efficiently than others. For example, some modeling techniques may be optimized to run on parallel computing clusters or on servers with specialty processors. The metadata for each modeling technique may indicate any such performance benefits. When engine 110 is assigning computing jobs, it may detect jobs for modeling techniques, the benefits of which are applied within currently available computing environments. Then, during each search, engine 110 may use a larger data set for these jobs." Here, the two classes that ACHIN mentions correspond to two or more states according to data, and thus ACHIN teaches generating two or more states according to data and features. See ACHIN in paragraph [0020] for more information.
Further, ACHIN teaches “deploying the models or part of the models to the states;”
See ACHIN in paragraph [0038]: "In some embodiments, the action of the method further comprises deploying a fitted model. In some embodiments, the time series data is the first time series data, and the step of developing the fitted model comprises applying the fitted model to the second time series data representing one or more instances of the prediction problem." Further, ACHIN in paragraph [0039] describes "In some embodiments, the fit model is deployed on one or more servers, and other fit models are also deployed on one or more servers, and the predictions to the fit model and other fit models are made." Further, ACHIN in paragraph [0443] mentions “an example will now be described. For online games, a game provider may support many different types of games, with many instances of each type of game and many users playing at each instance. In order to increase (eg, optimize) user satisfaction and revenue from games, such providers may desire to predict user behavior based on the performance of the games played by the user. Such a provider may use such predictions, provide suggestions to the player, or adjust its future gaming experience.” Here, the examiner construes the limitation to mean incorporating state information when running the model(s), where state is construed to be any value or quantity (categorical or numeric) for a data variable. ACHIN describes deploying a model or part of models to the instance, where instance relates to states (i.e. deploying the models or part of the models to the states). The example ACHIN illustrates in paragraph [0443] shows the instance being the state of the number of users and type of game in a game setting.
Further, ACHIN teaches “selecting a top-ranked model in each state;”
ACHIN in paragraph [0208] describes “the modeling system 100 selects specific candidate models and blending techniques, or uses some or all of the candidate models to generate some of the blending techniques in the modeling technique library. Or give the user the option to adapt everything automatically.” Here, ACHIN describes the generation and selection of candidate models. See ACHIN in paragraph [0038], “in some embodiments, the action of the method further comprises deploying a fitted model. In some embodiments, the time series data is the first time series data, and the step of developing the fitted model comprises applying the fitted model to the second time series data representing one or more instances of the prediction problem.” ACHIN describes that instances relate to a model in each state. See paragraph [0039] in ACHIN for more details.
Later, ACHIN in paragraphs [0255-0256] describes "as part of the model building process, predictive modeling system 100 may use cross-validation to select the best values of these tuning parameters, thereby improving tuning parameter selection and parameter, Create an audit trail of how choices affect results. The predictive modeling system 100 may adapt and evaluate different model structures, which are considered part of the present automated process, and rank the results with respect to verification set performance.
5. Select the final model. The selection of the final model can be made by the predictive modeling system 100 or by the user. In the latter case, the predictive modeling system may allow the user to, for example, evaluate the model's ranked verification set performance, compare the performance and rank by quality measures other than those used in the fitting process, and Support may be provided to assist in making this determination, including the opportunity to build an ensemble model from these component models that exhibit the best individual performance". Here, in paragraphs [0255-0256], ACHIN describes selecting the best model by individual performance, ranking the results with respect to verification set performance as well as by quality measures other than those used in the fitting process, applied to the instances mentioned in paragraph [0038] (i.e. a top-ranked model in each state).
Further, ACHIN teaches “deploying the selected models by states to a live engine;”
See paragraph [0113], where ACHIN discusses “data indicative of the results of applying the predictive modeling technique to the prediction problem or data set may be provided by a search engine (e.g., based on the results of previous trials using the predictive modeling technique for the prediction problem or data set). Provided by 110, provided by a user (e.g., based on the user's expertise), and / or obtained from any other suitable source. In some embodiments, the search engine 110 is based, at least in part, on the relationship between the actual performance of the instance of the prediction problem and the performance predicted by the prediction model generated via predictive modeling techniques. And update such data.”
Further, see paragraph [0231] where ACHIN mentions “for each model, search engine 110 may store a record of the modeling techniques used to generate the model, and the state of the model after fitting, including coefficients and hyperparameter values. Since each technique is already machine-executable, these values may be sufficient for the execution engine to generate predictions for new observations. In some embodiments, model predictions may be generated by applying preprocessing and modeling steps described in modeling techniques to each instance of new input data.”
See paragraph [0103] where ACHIN describes, “accordingly, the user interface may be used by an analyst to enhance its own productivity and / or to improve the performance of the search engine 110. In some embodiments, the user interface 120 presents the results of the search in real time and allows the user to adjust the scope of the search in real time (e.g., to adjust the allocation of resources during evaluation of different modeling solutions). Can guide search. In some embodiments, the user interface 120 provides tools for coordinating the efforts of multiple data analysts working on the same prediction problem and / or related prediction problems.” From paragraphs [0103], [0113], and [0231], ACHIN teaches that the search engine 110 is the live engine that works with the user interface 120 to run models with each instance from data in real time (i.e. deploying the selected models by states to a live engine). See ACHIN in paragraphs [0306] and [0415] for more details.
Further, ACHIN teaches “determining a probabilistic weight for each state according to live data and features;”
ACHIN in paragraph [0467] describes "Similarly, certain observations may represent particularly important events for which the user wishes to assign additional weights. Thus, an additional variable inserted into the dataset may indicate the relative weight of each observation. Engine 110 may then use this weight when training the models and calculating their accuracy, and the goal is to produce more accurate predictions under higher weight conditions.” ACHIN's mention of producing more accurate predictions under higher weight conditions shows that the weight probabilities assigned to each state vary according to the condition or state, which relates to determining a probabilistic weight for each state according to features.
Also, ACHIN mentions in paragraph [0415] “.. modeling techniques may use alternative or auxiliary training and / or test data. Such alternatives may include other real-world data from either the same or different data sources (eg, via interpolation and extrapolation) (eg, a wider range of possibilities than exist in real-world samples). It may include real-world data combined with machine-generated data (for the purpose of covering gender) or data completely generated by machine-based probabilistic models. In some embodiments, the value of the target variable used to train the secondary model is a predicted value from the primary model.” ACHIN notes here in paragraph [0415] that the machine-based probabilistic models correspond to selected models by states, and are incorporated into the real-time information, which relates to determining a probabilistic weight for each state according to live data and features. See paragraphs [0327] and [0039] in ACHIN for more information.
Further, ACHIN teaches “ensembling a plurality of prediction results of the models for each state using respective probabilistic weights;”
ACHIN in paragraph [0049] describes "determining the model-independent prediction of the feature comprises calculating a statistical measurement of the diffusion of the model-specific prediction, wherein the statistical measurement of the diffusion is model-specific. Is selected from the group consisting of the range, variance, and standard deviation of the predicted values of In some embodiments, determining the model-independent prediction of the feature includes calculating a combination of model-specific predictions of the feature. In some embodiments, calculating the combination of model-specific predictions includes calculating a weighted combination of model-specific predictions. In some embodiments, calculating a weighted combination of model-specific predictions includes assigning individual weights to model-specific predictions, and wherein a particular model-specific prediction corresponding to a particular fit prediction model is provided. The weight assigned to the value increases as the first accuracy score of the fit prediction model increases." Here, ACHIN mentions that calculating the combination of model-specific predictions includes calculating a weighted combination of model-specific predictions, which relates to ensembling results for models for each feature or state using probabilistic weights.
Further, ACHIN describes in paragraph [0255] “select a model structure, generate derived features, select model tuning parameters, fit and evaluate the model. In some embodiments, the predictive modeling system 100 may include a number of different, including, but not limited to, decision trees, neural networks, support vector machine models, regression models, boost trees, random forests, deep learning neural networks, and the like. The model type can be adapted. The predictive modeling system 100 may provide an option to automatically build an ensemble from these component models.” Here, ACHIN explicitly mentions building an ensemble. See paragraphs [0057-0058] in ACHIN for more information.
However, ACHIN fails to teach “and serving the ensembled prediction results as an output of the predictive engine.”
In an analogous art, TIAN teaches “and serving the ensembled prediction results as an output of the predictive engine.”
TIAN in paragraph [0068] describes "the ensemble scoring engine 120 stores all calculated results in the subset score record for the first subset S1 in block 240, as shown in the first subset score record 441." TIAN elaborates in paragraphs [0032, 0034] "the ensemble scoring engine 120 records a score calculated for the real time score request 109 in the score log 150, as being associated with a current subset of base model instances from the base model list 140 and the priority policy 135 as applied. In certain embodiments of the present invention, a separate process handling the score log 150 can be present. The score log 150 includes all intermediate scoring components and corresponding values during the process of the ensemble scoring engine 120, until producing the predicted ensemble score 195… The ensemble scoring engine 120 selects a subset of prioritized base model instances from the base mode list 140 that are enough to produce a passing score that meets the end condition 131, resulting in spending less idle time pending completion of other base models in an ensemble with longer response times." Here, TIAN teaches that the calculated results (i.e. ensembled prediction results as an output) in the subset score record are stored in the ensemble scoring engine 120 (i.e. predictive engine), which records (and also returns) a calculated score in real time associated with a current subset of base model instances from the base model list. TIAN overall teaches and serving the ensembled prediction results as an output of the predictive engine. Further, see TIAN in paragraph [0020] and figure 1 for more information. Note the word "serving" is construed according to the definition in the specification in paragraph [0036], where the "'Serving' component refers to a component of a predictive engine for returning prediction results, and for adding custom business logic. If an engine has multiple algorithms, the Serving component may combine multiple prediction results into one."
[Image: media_image2.png, 725 × 488, greyscale]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of ACHIN with those of TIAN, since both teach using a predictive engine that generates machine learning models with prediction results using probabilistic weights.
One of ordinary skill in the art would be motivated to do so because by integrating TIAN’s framework into the methods of ACHIN, one with ordinary skill in the art would achieve the goal “to improve processing efficiency in forming a subset for ensemble score prediction. By ordering the base models in the list, embodiments of the present invention can further improve efficiency in prediction by eliminating delays caused by selecting base models for a subset that cannot generate a passing score for accuracy. Certain embodiments of the present invention also employs a pass factor respective to each base model, in order to minimize the delays caused by running more than one subset of the base models.” (TIAN, paragraph [0093]).
Claim 13:
Regarding claim 13, ACHIN in view of TIAN teaches the limitations in claim 12.
Referring to claim 13, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.
Claim 23:
Regarding claim 23, ACHIN in view of TIAN teaches the limitations in claim 1.
Referring to claim 23, ACHIN further teaches “a computer program product, comprising a computer program which, when executed on a processor, causes the steps of the method according to claim 1 to be performed,”
ACHIN in paragraph [0477] describes "some embodiments, when run on one or more computers or other processors, perform one or more methods that implement the various embodiments discussed above. Computer readable media (or more than one computer readable media) (e.g., computer memory, one or more floppy disks, compact disks, optical disks, magnetic tape, flash memory) encoded with more than one program ...The term “program” or “software” is used to refer to any type of computer code or computer-executable instructions that may be employed to program a computer or other processor to implement the various aspects described in this disclosure. Used herein in a general sense to refer to a set. In addition, according to one aspect of the present disclosure, one or more computer programs that, when executed, perform a predictive modeling method need not reside on a single computer or processor, but may include Is understood to be distributed in a modular fashion among several different computers or processors to implement the various aspects of the present invention." Here, ACHIN shows a computer program which, when executed on a processor, causes the steps of the method.
Claims 3, 4, 5, 14, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over ACHIN in view of TIAN, further in view of Johnson, M. et al. (US PG Pub. No. US 2021/0304003-A1), published on September 30, 2021, (hereafter, JOHNSON).
Claim 3:
Regarding claim 3, ACHIN in view of TIAN teaches the limitations in claim 2.
However, ACHIN in view of TIAN did not teach “the method according to claim 2, wherein the method further comprises: updating the probabilistic weights of the states through rewards or penalties according to the performance, thereby tuning the predictive engine,”
In an analogous field, JOHNSON teaches “the method according to claim 2, wherein the method further comprises: updating the probabilistic weights of the states through rewards or penalties according to the performance, thereby tuning the predictive engine,”
See paragraph [0026], where JOHNSON mentions "in some instances, the model is trained as a classifier and configured to, for a given input (e.g., an utterance), predict or infer a class or category for that input from a set of target classes or categories. Such a classifier is typically trained to generate a distribution of probabilities for the set of target classes, with a probability being generated by the classifier for each target class in the set and where the generated probabilities sum up to one (or 100%, if expressed as a percentage). In a classifier such as a neural network, the output layer of the neural network may use a softmax function as its activation function to produce the distribution of probability scores for the set of classes. These probabilities are also referred to as confidence scores. The class with the highest associated confidence score may be output as the answer for the input." Here, JOHNSON describes using probabilities, which relate to probabilistic weights, and classes, which relate to states.
Later, see paragraph [0083], where JOHNSON mentions "the hyperparameter tuning system provisions for validating the machine-learning model on a wide range of validation/test datasets. The hyperparameter tuning system associates specific target values to one or more metrics that are used to evaluate a performance of the machine-learning model. A hyperparameter objective function (e.g., a loss function) is constructed based on the target values for the multiple metrics. Additionally, as will be described in detail below with reference to FIG. 4, the hyperparameter tuning system employs an asymmetric loss mechanism (i.e., not meeting a target is penalized heavily as compared to the case of rewarding for meeting or surpassing the target) to assign weights to the different metrics in validating the machine-learning model."
See paragraph [0080], where JOHNSON further describes that in the digital assistant, "each of the datasets used in training the machine-learning model is assigned a weight that indicates an importance of the dataset in training the machine-learning model. In other words, the weight assigned to a dataset corresponds to a level of influence the dataset has on the training of the machine-learning model." Note here that the examiner construes tuning to mean optimizing or adjusting a machine learning model to increase accuracy in predictions. The examiner also construes predictive engine to mean any tool that can run a machine learning model. Here, JOHNSON describes a hyperparameter tuning system that assigns weights that influence model performance through rewards or penalties in paragraph [0083], where the overall model includes both probabilities (mentioned in paragraph [0026]) and weights used for optimizing the model.
See paragraph [0138] where JOHNSON describes "data repositories 814 , 816 may be of different types. In certain examples, a data repository used by server 812 may be a database, for example, a relational database, such as databases provided by Oracle Corporation® and other vendors. One or more of these databases may be adapted to enable storage, update, and retrieval of data to and from the database in response to SQL-formatted commands." Here, JOHNSON shows how data is updated in real-time, to be used for processing later. Overall, JOHNSON teaches updating the probabilistic weights of the states through rewards or penalties according to the performance, thereby tuning the predictive engine. See paragraphs [0031], [0114] and [0125] from JOHNSON for more information.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of ACHIN and TIAN, which use a predictive engine that generates machine learning models with prediction results using probabilistic weights, with JOHNSON’s teaching of updating the probabilistic weights of the states and tuning model performance.
One of ordinary skill in the art would be motivated to do so because by integrating JOHNSON’s framework into the methods of ACHIN and TIAN, one with ordinary skill in the art would achieve the goal of providing “entities help describe an intent more fully and enable the skill bot to complete a user request,” (JOHNSON, [0060]), and “improve the performance of the chatbot and user experience” (JOHNSON, [0025]).
Claim 4:
Regarding claim 4, ACHIN in view of TIAN, and further in view of JOHNSON teaches the limitations of claim 3.
Referring to claim 4, ACHIN further teaches “the method according to claim 3, wherein the models are generated from data, features and/or data derived from data,”
ACHIN in paragraph [0273] describes "predictive modeling system 100 automates and efficiently implements data pre-processing (e.g., anomaly detection), data partitioning, multi-feature generation, model fitting, and model evaluation, the time required to develop a model may be much shorter than the time in a conventional development cycle. Further, in some embodiments, the resulting model may be more accurate and more useful because the predictive modeling system automatically includes a data pre-processing procedure that handles both known data anomalies, such as missing data and outliers,"
Further, ACHIN in paragraph [0293] describes "(6) External System 660. Like any other Internet application, the use of APIs may allow external systems to integrate with predictive modeling system 100 at any 100 layer of architecture 600. For example, a business dashboard application can access graphical visualizations and modeling results through the interface services layer 620. External data warehouses or even live business applications can provide modeled datasets to the analytics service layer 630 through the data integration platform." ACHIN shows that models are created from or derived from data. Overall, ACHIN teaches the models are generated from data, features and/or data derived from data.
Claim 5:
Regarding claim 5, ACHIN in view of TIAN, and further in view of JOHNSON teaches the limitations of claim 3.
Referring to claim 5, ACHIN further teaches “the method according to claim 3, wherein the top ranked models are selected by ranking one or more performance metrics and/or correlating with other models,”
ACHIN describes in paragraph [0112] "the metadata of a template includes data indicating the (actual or expected) results of applying the predictive modeling technique represented by the template to one or more prediction problems and / or data sets. Predictive modeling techniques, The result of applying to the problem or dataset may include, without limitation, an accuracy with which the predictive model generated by the predictive modeling technique predicts the target of the prediction problem or dataset, a rank of the accuracy of the predictive model generated by the predictive modeling technique (relative to other predictive modeling techniques) for the prediction problem or dataset, a score representing the utility of using the predictive modeling technique to generate a predictive model for the prediction problem or dataset (e.g., a value generated by the predictive model for an objective function), etc." Here, ACHIN teaches the selection and ranking of models by accuracy (which corresponds to the top-ranked models being selected by ranking one or more performance metrics and/or correlating with other models).
Claim 14:
Regarding claim 14, ACHIN in view of TIAN teaches the limitations of claim 13.
Referring to claim 14, the claim recites similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.
Claims 15-16:
Regarding claims 15-16, ACHIN in view of TIAN, further in view of JOHNSON teaches the limitations of claim 14.
Further, claims 15 and 16 recite similar additional limitations as claims 4 and 5, respectively, and are rejected under the same teachings and rationale.
Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over ACHIN in view of TIAN, further in view of JOHNSON, and further in view of Cella C. et al. (US PG Pub. No. US 2021/0287459-A1), published on September 16, 2021, (hereafter, CELLA21).
Claim 6:
Regarding claim 6, ACHIN in view of TIAN, further in view of JOHNSON teaches the limitations in claim 5.
However, referring to claim 6, ACHIN in view of TIAN, and further in view of JOHNSON, did not teach “the method according to claim 3, wherein the probabilistic weights are determined using probabilities of the current states according to the latest data,”
In an analogous art, CELLA21 teaches “the method according to claim 3, wherein the probabilistic weights are determined using probabilities of the current states according to the latest data,”
CELLA21 in paragraph [0056] describes "a method for updating one or more probability of downtime values of one or more transportation system digital twins is disclosed. The method includes receiving a request to update one or more probability of downtime values of one or more transportation system digital twins; ... selecting data sources from a set of available data sources for one or more inputs for the one or more dynamic models; retrieving data from the selected data sources; running the one or more dynamic models using the retrieved data as the one or more inputs to calculate one or more output values that represent the one or more probability of downtime values; and updating the one or more probability of downtime values for the one or more transportation system digital twins based on the one or more output values of the one or more dynamic models." Here, the values that CELLA21 describes as representing the one or more probability of downtime values show that the probabilistic weights are determined using probabilities of the current states.
Further, see CELLA21, paragraph [0765], which describes "in embodiments, the artificial intelligence system 60112 may output scores for each possible prediction, where each prediction corresponds to a possible outcome...The artificial intelligence system 60112 may then select the outcome with the greater score as the prediction." Here, CELLA21 further describes using predictions that correspond to possible outcomes (i.e., probabilities of the current states).
Also, see CELLA21, paragraph [0768], which describes "FIG. 69 is an example embodiment depicting the deployment of the digital twin 60136 to perform predictive maintenance on the vehicle 60104. Digital twin 60136 receives data from the database 60118 on a real-time or near real-time basis. The database 60118 may store different types of data in different datastores. For example, the vehicle datastore 61102 may store data related to vehicle identification and attributes, vehicle state and event data, data from maintenance records, historical operating data, notes from vehicle service engineer, etc. The sensor datastore 61104 may store sensor data from operations including data from temperature, pressure, and vibration sensors that may be stored as signal or time-series data."
Further, see CELLA21, paragraph [0792], which describes "the artificial intelligence system 65248 may also define the digital twin system 65330 to create a digital replica of one or more of the transportation entities. The digital replica of the one or more of the transportation entities may use substantially real-time sensor data to provide for substantially real-time virtual representation of the transportation entity and provides for simulation of one or more possible future states of the one or more transportation entities." Here, CELLA21 indicates that the data used is the latest data, and that this data is part of a system that collects and analyzes data in real time. Overall, CELLA21 teaches wherein the probabilistic weights are determined using probabilities of the current states according to the latest data. See paragraphs [0545, 0549, 0729] of CELLA21 for more information.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of ACHIN, TIAN, and JOHNSON with the teachings of CELLA21 by using the teachings of ACHIN, TIAN, and JOHNSON of a predictive engine that generates machine learning models with prediction results using probabilistic weights, with CELLA21’s teaching that the probabilistic weights are determined using probabilities of the current states according to the latest data.
One of ordinary skill in the art would be motivated to do so because integrating CELLA21’s framework into the methods of ACHIN, TIAN, and JOHNSON would achieve the goal of providing “a transportation system 5111 … having an artificial intelligence system 5136 that automatically randomizes a parameter of an in-vehicle experience in order to improve a user state that benefits from variation. … may be configured to automatically undertake actions based on an objective or feedback function, such as where an artificial intelligence system 5136 is trained on outcomes from a training data set to provide outputs to one or more vehicle systems to improve health, satisfaction, mood, safety, one or more financial metrics, efficiency, or the like” (CELLA21, paragraph [0660]).
Claim 17:
Regarding claim 17, the claim recites limitations similar to those of claim 6 and is rejected for similar reasons using similar teachings and rationale.
Claims 7, 10-11, 18, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over ACHIN in view of TIAN, further in view of Cella C. et al. (Pub. No. WO 2022/221719 A2), published on October 20, 2022 (hereafter, CELLA22).
Claim 7:
Regarding claim 7, ACHIN in view of TIAN teaches the limitations of claim 1.
However, ACHIN in view of TIAN does not teach “the method according to claim 1, wherein the state comprises a status of a plant machine, including the number of years and months that machine components have been in operation, an outdoor temperature, and/or an outdoor humidity, and wherein the predictive engine is used to predict productivity of the plant or a probability of the machine requiring maintenance.”
In an analogous art, CELLA22 teaches “the method according to claim 1, wherein the state comprises a status of a plant machine, including the number of years and months that machine components have been in operation, an outdoor temperature, and/or an outdoor humidity, and wherein the predictive engine is used to predict productivity of the plant or a probability of the machine requiring maintenance.”
See CELLA22, paragraph [0613], which describes "an example artificial intelligence system 1160 trains a machine predictive maintenance model. A predictive maintenance model may be a model that receives machine related data and outputs one or more predictions or answers regarding the remaining life of the machine. The training data can be gathered from multiple sources including machine specifications, environmental data, sensor data, run information, outcome data and notes maintained by machine operators. The artificial intelligence system 1160 takes in the raw data, pre-processes it and applies machine learning algorithms to generate the predictive maintenance model." Here, CELLA22's mention of the remaining life of the machine shows predicting maintenance conditions or states of the machine, i.e., a predictive engine used to predict productivity of the plant or a probability of the machine requiring maintenance.
Further, see CELLA22, paragraph [1893], which mentions "the control interface module 12130 may include networking modules 12131, sensor modules 12132, computing modules 12133, security modules 12134, AI modules 12135, communications modules 12136 and user interface modules 12138. In embodiments, the control interface module 12130 receives one or more sensor modules 12132. The sensor modules that are used to configure the MPR 12100 may depend on the tasks and jobs that the MPR 12100 is being configured to perform. For instance, the sensor modules 12132 may include weight sensors, environment sensors (e.g., temperature, humidity, ambient light), motion sensors, vision sensors (e.g., cameras, lidar sensors, radar sensors, etc.), or other suitable sensors. In embodiments, the sensor modules 12130 may be specialized chips, such as a lab-on-a-chip package, an organ-on-chip package, or the like." Here, CELLA22 shows that states may include temperature or humidity, used as the status of a plant machine.
See paragraph [0181], where CELLA22 describes "a wide range of data types may be stored in the storage layer 624 using various storage media and data storage types, data architectures 1002, and formats, including, without limitation: asset and facility data 1030, state data 1140 (such as indicating a state, condition status, or other indicator with respect to any of the value chain network entities 652, any of the applications 630 or components or workflows thereof, or any of the components or elements of the platform 604, among others), worker data 1032 (including identity data, role data, task data, workflow data, health data, attention data, mood data, stress data, physiological data, performance data, quality data and many other types); event data 1034 (such as with respect to any of a wide range of events, including operational data, transactional data, workflow data, maintenance data, and many other types of data that includes or relates to events that occur within a value chain network 668.” Here, CELLA22 mentions storing state or status information from state data 1140 along with maintenance data from event data 1034. See CELLA22 in paragraph [0003] for more information.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of ACHIN and TIAN with the teachings of CELLA22 by using the teachings of ACHIN and TIAN of a predictive engine that generates machine learning models with prediction results using probabilistic weights, with CELLA22’s teaching of a status of a plant machine incorporated into the predictive engine to predict productivity of the plant or a probability of the machine requiring maintenance.
One of ordinary skill in the art would be motivated to do so because integrating CELLA22’s framework into the methods of ACHIN and TIAN would achieve the goal of providing a system that can “allow enterprises not only to obtain data, but to convert the data into insights and to translate the insights into well-informed decisions and timely execution of efficient operations” (CELLA22, paragraph [0006]), and in which “the adaptive intelligent systems layer 614 may include a set of systems, components, services and other capabilities that collectively facilitate the coordinated development and deployment of intelligent systems, such as ones that can enhance one or more of the applications 630 at the application platform 604; ones that can improve the performance of one or more of the components, or the overall performance (e.g., speed/latency, reliability, quality of service, cost reduction, or other factors) of the connectivity facilities 642; ones that can improve other capabilities within the adaptive intelligent systems layer 614…ones that improve the performance (e.g., speed/latency, energy utilization, storage capacity, storage efficiency, reliability, security, or the like) of one or more of the components, or the overall performance, of the value chain network-oriented data storage systems 624; ... or ones that generally improve any of the process and application outputs and outcomes 1040 pursued by use of the platform 604” (CELLA22, paragraph [0189]).
Claim 10:
Regarding claim 10, ACHIN in view of TIAN teaches the limitations of claim 1.
However, ACHIN in view of TIAN does not teach “the method according to claim 1, wherein the state comprises a spending appetite of consumers, including a type of spending and/or a level of spending, and wherein the predictive engine is used to predict a probability of the consumers shopping online.”
In an analogous art, CELLA22 teaches “the method according to claim 1, wherein the state comprises a spending appetite of consumers, including a type of spending and/or a level of spending, and wherein the predictive engine is used to predict a probability of the consumers shopping online.”
See CELLA22, paragraph [0151], which describes "over time, companies have increasingly used technology solutions to improve outcomes related to a traditional supply chain like the one depicted in FIG. 1, such as software systems for predicting and managing customer demand, RFID and asset tracking systems for tracking goods as they move through the supply chain, navigation and routing systems to improve the efficiency of route selection, and the like. However, some large trends have placed manufacturers, retailers and other businesses under increasing pressure to improve supply chain performance. First, online and ecommerce operators, in particular Amazon™ have become the largest retail channels for many categories of goods and have introduced distribution and fulfillment centers 112 throughout some geographies like the United States that house hundreds of thousands, and sometimes more, product categories (SKUs), so that customers can receive items the day after they are ordered, and in some cases on the same day (and in some cases delivered to the door by a drone, robot, and/or autonomous vehicle. For retailers that do not have extensive geographic distribution of fulfillment centers or warehouses, customer expectations for speed of delivery place increased pressure on supply chain efficiency and optimization. Accordingly, a need still exists for improved supply chain methods and systems."
Further, see CELLA22, paragraph [0214], which describes “in embodiments, the adaptive intelligence systems 614 may provide a set of artificial intelligence capabilities to facilitate providing the set of predictions for the coordinated set of demand management applications and supply chain applications. In one non-limiting example, the set of artificial intelligence capabilities may include a probabilistic neural network that may be used to predict a fault condition or a problem state of a demand management application such as a lack of sufficient validated feedback. The probabilistic neural network may be used to predict a problem state with a machine performing a value chain operation (e.g., a production machine, an automated handling machine, a packaging machine, a shipping machine and the like).”
Further, see CELLA22, paragraph [2426], which describes “In addition to driving other user systems, many of the additional benefits may influence design decisions. Demand forecasting may be improved with predictive analytics to understand and predict customer demand to optimize supply decisions by corporate supply chain and business management. Predictive procurement may forecast the future price trends, price fluctuations, future risks to manage and the potentials required with the aid of a proper analysis based on previous procurement data. Real time/up to date product management may improve customer engagement and extend product lifecycles.”
Further, see CELLA22, paragraph [0191], which describes "in embodiments, the value chain monitoring systems layer 614 and its data collection systems 640 may include a wide range of systems for the collection of data. ...behavioral monitoring systems 1538 (such as for monitoring movements, shopping behavior, buying behavior, clicking behavior, behavior indicating fraud or deception, user interface interactions, product return behavior, behavior indicative of interest, attention, boredom or the like, mood-indicating behavior (such as fidgeting, staying still, moving closer, or changing posture) and many others); and any of a wide variety of Internet of Things (IoT) data collectors 1172." Here, CELLA22 in [0191] teaches shopping behavior (i.e., the state comprises a spending appetite of consumers), CELLA22 in [0151] mentions that this method applies to online shopping stores, and CELLA22 in [0214] teaches a probabilistic neural network for organizing supply chain data (i.e., the predictive engine is used to predict a probability of the consumers shopping online). See paragraphs [1037, 1455] for more information. Overall, CELLA22 teaches a method wherein the state comprises a spending appetite of consumers, including a type of spending and/or a level of spending, and wherein the predictive engine is used to predict a probability of the consumers shopping online.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of ACHIN and TIAN with the teachings of CELLA22 by using the teachings of ACHIN and TIAN of a predictive engine that generates machine learning models with prediction results using probabilistic weights, with CELLA22’s teaching of a state measuring consumer spending variables incorporated into a predictive engine that is used to predict a probability of the consumers shopping online.
One of ordinary skill in the art would be motivated to do so because integrating CELLA22’s framework into the methods of ACHIN and TIAN would achieve the goal of providing a system that can “allow enterprises not only to obtain data, but to convert the data into insights and to translate the insights into well-informed decisions and timely execution of efficient operations” (CELLA22, paragraph [0006]), and in which “the adaptive intelligent systems layer 614 may include a set of systems, components, services and other capabilities that collectively facilitate the coordinated development and deployment of intelligent systems, such as ones that can enhance one or more of the applications 630 at the application platform 604; ones that can improve the performance of one or more of the components, or the overall performance (e.g., speed/latency, reliability, quality of service, cost reduction, or other factors) of the connectivity facilities 642; ones that can improve other capabilities within the adaptive intelligent systems layer 614…ones that improve the performance (e.g., speed/latency, energy utilization, storage capacity, storage efficiency, reliability, security, or the like) of one or more of the components, or the overall performance, of the value chain network-oriented data storage systems 624; ... or ones that generally improve any of the process and application outputs and outcomes 1040 pursued by use of the platform 604” (CELLA22, paragraph [0189]).
Claim 11:
Regarding claim 11, ACHIN in view of TIAN teaches the limitations of claim 1.
However, ACHIN in view of TIAN does not teach “the method according to claim 1, wherein the state comprises market or financial conditions, and wherein the predictive engine is used to predict asset prices or risks, or is used to predict a risk of lending to a company or to predict a stock price of the company.”
In an analogous art, CELLA22 teaches “the method according to claim 1, wherein the state comprises market or financial conditions, and wherein the predictive engine is used to predict asset prices or risks, or is used to predict a risk of lending to a company or to predict a stock price of the company.”
See CELLA22, paragraphs [2601-2602], which describe "in embodiments, the quantum transfer pricing system consolidates all financial data related to transfer pricing on an ongoing basis throughout the year for all entities of an organization wherein the consolidation involves applying quantum entanglement to overlay data into a single quantum state. In embodiments, the financial data may include profit data, loss data, data from intercompany invoices (potentially including quantities and prices), and the like. In embodiments, the quantum transfer pricing system may interface with a reporting system that reports segmented profit and loss, transaction matrices, tax optimization results, and the like based on superposition data. In embodiments, the quantum transfer pricing system automatically generates forecast calculations and assesses the expected local profits for any set of quantum states." Here, CELLA22 teaches that the pricing system (i.e., the predictive engine) generates forecast calculations to assess expected local profits (i.e., predicts asset prices), thereby analyzing and tracking market or financial conditions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of ACHIN and TIAN with the teachings of CELLA22 by using the teachings of ACHIN and TIAN of a predictive engine that generates machine learning models with prediction results using probabilistic weights, with CELLA22’s teaching of a state comprising market or financial conditions incorporated into a predictive engine that is used to predict asset prices or risks, a risk of lending to a company, or a stock price of the company.
One of ordinary skill in the art would be motivated to do so because integrating CELLA22’s framework into the methods of ACHIN and TIAN would achieve the goal of providing a system that can “allow enterprises not only to obtain data, but to convert the data into insights and to translate the insights into well-informed decisions and timely execution of efficient operations” (CELLA22, paragraph [0006]), and in which “the adaptive intelligent systems layer 614 may include a set of systems, components, services and other capabilities that collectively facilitate the coordinated development and deployment of intelligent systems, such as ones that can enhance one or more of the applications 630 at the application platform 604; ones that can improve the performance of one or more of the components, or the overall performance (e.g., speed/latency, reliability, quality of service, cost reduction, or other factors) of the connectivity facilities 642; ones that can improve other capabilities within the adaptive intelligent systems layer 614…ones that improve the performance (e.g., speed/latency, energy utilization, storage capacity, storage efficiency, reliability, security, or the like) of one or more of the components, or the overall performance, of the value chain network-oriented data storage systems 624; ... or ones that generally improve any of the process and application outputs and outcomes 1040 pursued by use of the platform 604” (CELLA22, paragraph [0189]).
Claim 18:
Regarding claim 18, ACHIN in view of TIAN teaches the limitations of claim 1.
Referring to claim 18, the claim recites limitations similar to those of claim 7 and is rejected for similar rationale and reasoning as claim 7.
Claims 21-22:
Regarding claims 21 and 22, ACHIN in view of TIAN teaches the limitations of claim 12.
Further, claims 21 and 22 recite additional limitations similar to those of claims 10 and 11, respectively, and are rejected under the same teachings and rationale.
Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over ACHIN in view of TIAN, and further in view of Horvitz E. et al. (U.S. Patent No. US 7,233,933 B2), issued on June 19, 2007 (hereafter, HORVITZ).
Claim 8:
Regarding claim 8, ACHIN in view of TIAN teaches the limitations of claim 1.
However, ACHIN in view of TIAN does not teach “the method according to claim 1, wherein the state comprises a status of a computer, including applications already open on the computer, time of day, and/or working hours, and wherein the predictive engine is used to predict the purpose or task of a user using the computer.”
In an analogous art, HORVITZ teaches “the method according to claim 1, wherein the state comprises a status of a computer, including applications already open on the computer, time of day, and/or working hours, and wherein the predictive engine is used to predict the purpose or task of a user using the computer.”
See column 10, lines 27-47, where HORVITZ describes, "the Coordinate system 200 logs periods of presence and absence in the event log 224. Events are typically annotated by the source devices 210, whereby devices are defined by respective capabilities and locations. For example, a user can specify that certain devices have full-video conferencing abilities. The tagging of events by specific devices, indexed by capabilities allows the system 200 to forecast a probability distribution over the time until the user will have access to different kinds of devices without making a special plan. When these devices are assigned to fixed locations, such forecasts can be used to forecast a user's location. Coordinate's event system can monitor histories of a user's interaction with computing systems, including applications that are running on a system, applications that are now in focus or that have just gone out of focus. As an example, the system can identify when a user is checking email or reviewing a notification. Thus, moving beyond presence and absence, Coordinate 200 supports such forecasts as the time until a user will likely review email (or other communication), given how much time has passed since he or she last reviewed email.” Here, HORVITZ's statement that "Coordinate's event system can monitor histories of a user's interaction with computing systems, including applications that are running on a system," shows tracking or predicting the purpose or task of a user using the computer.
Further, see column 9, lines 12-16 and lines 32-34, where HORVITZ describes "the Coordinate system 200 is generally composed of four core components, however more or less than four components may be employed. A data-acquisition component 210 (or components) executes on multiple computers, components, or devices that a user is likely to employ. ... In general, multiple dimensions of a user's activities across multiple devices, and appointment status, as encoded in a calendar, are stored in a relational database." Here, HORVITZ shows tracking data of a user's activity on various computers or devices, i.e., computer usage data.
Further, see column 26, lines 39-53, where HORVITZ describes "the contactee preference data 2854 can include data concerning, but not limited to, preferences concerning the time of day for communicating (e.g., early morning, business hours, evening, late night, sleeping hours), the time of the week for communicating (e.g., Monday through Friday, Weekend, Holiday, Vacation), identity of contactors (e.g., employer, employees, critical colleague, colleague, peers, nuclear family, extended family, close friends, friends, acquaintances, others), hardware currently available or available within a time horizon of a communication attempt (e.g., desktop, laptop, home computer), preferred software (e.g., email, word processing, calendaring) and preferred interruptability (e.g., do not interrupt while focused on work, only interrupt while not focused), for example." Here, HORVITZ shows other states, including computer status such as time of day or working hours. Overall, HORVITZ teaches that the state comprises a status of a computer, including applications already open on the computer, time of day, and/or working hours, and that the predictive engine is used to predict the purpose or task of a user using the computer.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of ACHIN and TIAN with the teachings of HORVITZ by using the teachings of ACHIN and TIAN of a predictive engine that generates machine learning models with prediction results using probabilistic weights, with HORVITZ’s teaching of a status of a computer, including applications already open on the computer, time of day, or working hours, incorporated into a predictive engine that is used to predict the purpose or task of a user using the computer.
One of ordinary skill in the art would be motivated to do so because integrating HORVITZ’s framework into the methods of ACHIN and TIAN would achieve the goal that “predictions received by such persons or applications can then be employed to facilitate more efficient and timely communications between parties since parties or systems attempting to communicate can be given forecasts or clues to possible periods or devices in which to reach the user based upon trained observances of past user activities” (HORVITZ, col. 2, lines 57-63).
Claim 19:
Regarding claim 19, ACHIN in view of TIAN teaches the limitations of claim 12.
Referring to claim 19, the claim recites limitations similar to those of claim 8 and is rejected for similar rationale and reasoning as claim 8.
Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over ACHIN in view of TIAN, further in view of CELLA21.
Claim 9:
Regarding claim 9, ACHIN in view of TIAN teaches the limitations of claim 1.
However, ACHIN in view of TIAN does not teach “the method according to claim 1, wherein the state comprises a state of traffic, including traffic conditions on each route, the date, and/or whether it is a holiday, and wherein the predictive engine is used to predict a probability of traffic congestion.”
In an analogous art, CELLA21 teaches “the method according to claim 1, wherein the state comprises a state of traffic, including traffic conditions on each route, the date, and/or whether it is a holiday, and wherein the predictive engine is used to predict a probability of traffic congestion.”
See CELLA21, paragraph [0385], which mentions "FIG. 4 illustrates a range of parameters 430 that may be taken as inputs by an expert system or AI system 136 (FIG. 1), or component thereof, as described throughout this disclosure, or that may be provided as outputs from such a system and/or one or more sensors 125 (FIG. 1), cameras 127 (FIG. 1), or external systems. Parameters 430 may include one or more goals 431 or objectives (such as ones that are to be optimized by an expert system/AI system, such as by iteration and/or machine learning), such as a performance goal 433, such as relating to fuel efficiency, trip time, satisfaction, financial efficiency, safety, or the like. Parameters 430 may include market feedback parameters 435, such as relating to pricing, availability, location, or the like of goods, services, fuel, electricity, advertising, content, or the like. Parameters 430 may include rider state parameters 437, such as parameters relating to comfort 439, emotional state, satisfaction, goals, type of trip, fatigue and the like. Parameters 430 may include parameters of various transportation-relevant profiles, such as traffic profiles 440 (location, direction, density and patterns in time, among many others), road profiles 441 (elevation, curvature, direction, road surface conditions and many others), user profiles, and many others. Parameters 430 may include routing parameters 442, such as current vehicle locations, destinations, waypoints, points of interest, type of trip, goal for trip, required arrival time, desired user experience, and many others." Here, CELLA21 describes traffic-related profile parameters, such as location, direction, vehicle locations, and destinations, as the states of traffic.
Further, see CELLA21, paragraph [0470], which describes "the hybrid neural network 2247 is trained for at least one of predicting and optimizing based a commerce-related event at a location in the social media data 22114. … In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes traffic conditions. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes weather conditions. In embodiments, the social media data analyzed to predict a localized effect on a transportation system includes entertainment options." Here, CELLA21 describes predictions used to forecast localized effects on the transportation system, including traffic conditions, which relates to the predictive engine being used to predict a probability of traffic congestion; see paragraph [0537] of CELLA21 for more information.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the references of ACHIN and TIAN with the teachings of CELLA21 by using the teachings of ACHIN and TIAN of a predictive engine that generates machine learning models with prediction results using probabilistic weights, with CELLA21’s teaching that the state comprises a state of traffic, including traffic conditions, and that the predictive engine is used to predict a probability of traffic congestion.
One of ordinary skill in the art would be motivated to do so because integrating CELLA21’s framework into the methods of ACHIN and TIAN would achieve the goal of providing a model “configured to automatically undertake actions based on an objective or feedback function, such as where an artificial intelligence system 5136 is trained on outcomes from a training data set to provide outputs to one or more vehicle systems to improve health, satisfaction, mood, safety, one or more financial metrics, efficiency, or the like” (CELLA21, paragraph [0660]).
Claim 20:
Regarding claim 20, ACHIN in view of TIAN teaches the limitations of claim 12.
Referring to claim 20, the claim recites limitations similar to those of claim 9 and is rejected for similar rationale and reasoning as claim 9.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WENWEI ZENG whose telephone number is (571)272-7111. The examiner can normally be reached Monday-Friday, 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WenWei Zeng/Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146