Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Regarding the applicant’s traversal of the 35 U.S.C. 101 rejections of the previous office action, the applicant’s arguments filed October 28, 2025 have been fully considered but are unpersuasive.
The applicant asserts that claim 1 recites features that integrate the abstract ideas into a practical application, claiming that the invention improves the functioning of a computer or another technology or field. Specifically, the applicant argues that the invention provides improved computing efficiency and resource utilization by enabling centralized monitoring for changes in machine learning model performance when generating time series forecasts, in order to detect degradation or other performance changes in machine learning models that generate time series forecasts which might otherwise go unnoticed, further citing [0014]-[0015] of the specification as evidence.
The examiner would respectfully like to draw the applicant’s attention back to the previously cited section of the MPEP (MPEP 2106.04(d)):
"[t]his evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application."
In other words, once the abstract idea is identified, only the additional elements are analyzed, individually and in combination, when determining whether the claim as a whole integrates the exception into a practical application. This means that, in the case of claim 1 as currently written, only the structure of the system, the receiving of data, the presence of a machine learning model, and the providing of machine learning metrics via an interface can be relied upon to integrate the abstract ideas (all of the other limitations) into a practical application. The cited improvements being argued appear to rely on the abstract ideas themselves, such as the monitoring of the system, which, as cited above from the MPEP, is not a valid rationale.
Therefore, the 35 U.S.C. 101 rejections of independent claims 1, 5, and 14 are all maintained. Further, dependent claims 2-4, 6-13, and 15-20 each depend upon one of these claims and are thus also rejected under the same rationale, in addition to their own merits as cited in the previous office action and below.
Regarding the applicant’s traversal of the 35 U.S.C. 102/103 rejections of the previous office action, the applicant’s arguments filed October 28, 2025 have been fully considered but are unpersuasive.
Applicant asserts that both DASGUPTA and BUGDAYCI, as cited previously, fail to teach “determine that the data is associated with a previously generated time series forecast by a machine learning model” and “generate one or more performance metrics for the machine learning model according to a comparison of the data with the previously generated time series forecast”. Specifically, the applicant asserts that a determination that the data is associated with a previously generated time series forecast by a machine learning model is not made, since “concurrently” is defined in DASGUPTA to refer to actions taken in parallel that do not necessarily begin or end at the same time. In the applicant’s view, even if “calculations of a plurality of forecasting data” do not begin or end at the same time, they are performed in parallel for at least some portion of time, and therefore are not “previously generated”.
The examiner respectfully asserts that even if the calculations are performed in parallel for at least some portion of time, because they do not necessarily begin or end at the same time, the one that ends sooner would qualify under the broadest reasonable interpretation as “previously generated”. Since the method does not continue until all are finished, this determination is involved.
Further, the applicant asserts that the cited portion of DASGUPTA (Page 10, lines 8-14), where stream metrics 242 of forecast data 206 are normalized and compared to stream metrics 242 of forecast data 206n, merely recites a comparison of stream metrics, as opposed to a comparison of the data with the previously generated time series forecast, where the data is received to generate a new time series forecast.
The examiner respectfully submits that, as further cited at page 8, lines 11-14 of DASGUPTA:
“Diagnostics framework 200 may comprise stream node data input 202 (2021, 2022, 202n), which take data of one or more of stream nodes 110, stream forecasting model 204, and stream forecast data 206 (2061, 2062, 206n).”
In other words, 206 and 206n are two separate streams of forecast data, received concurrently as defined above, where one may finish before the other and thus qualify as “previously generated”. The stream metrics associated with each stream are based upon the forecast data itself, and thus their comparison is equivalent to the limitation as claimed.
Therefore, the rejections for independent claims 1, 5, and 14 under 35 U.S.C. 102 & 35 U.S.C. 103 are maintained. Further, dependent claims 2-4, 6-13, and 15-20 are dependent upon one of these claims and are therefore rejected under the same rationale, in addition to their own merits as listed in the previous office action and below.
Claim Rejections - 35 USC § 101
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental process) without significantly more.
Regarding claim 1, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A system, comprising: at least one processor; and a memory, storing program instructions”. The system, as described, is within one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:
“generate a new time series forecast” (A person can mentally evaluate data of a time series and make a judgement to predict later data (MPEP 2106).)
“determine that the data is associated with a previously generated time series forecast” (A person can mentally evaluate data and make a judgement to determine that it is associated with a previously generated time series forecast (MPEP 2106).)
“determine that model monitoring is enabled for the machine learning model” (A person can mentally evaluate a machine learning model and make a judgement to determine that model monitoring is enabled for it (MPEP 2106).)
“responsive to the determination that model monitoring is enabled for the machine learning model: generate one or more performance metrics for the machine learning model according to a comparison of the data with the previously generated time series forecast” (In response to a determination, a person can mentally evaluate a comparison of data and make a judgement to generate performance metrics from that comparison (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
“A system, comprising: at least one processor; and a memory, storing program instructions that when executed by the at least one processor, cause the at least one processor to implement a time series forecasting system” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)
“receive data” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
“…by a machine learning model…” (This is mere instruction to apply the judicial exception using a generic computer.)
“provide, via an interface of the time series forecasting system, the one or more performance metrics for the machine learning model.” (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (v) recites use of a computer as a tool to perform the abstract idea, and element (vii) recites mere instruction to apply the judicial exception using a generic computer; neither is indicative of significantly more. Additional elements (vi) & (viii) recite insignificant extra-solution activities. Further, element (vi) recites steps of receiving/transmitting data over a network, which the courts have found to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Further, element (viii) recites steps of presenting via an interface, which the courts have found to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 2, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 2 recites the following additional mental processes:
“wherein the time series forecasting system is further configured to: detect an action event for the machine learning model based, at least in part, on the one or more performance metrics” (A person can mentally evaluate the performance metrics of a machine learning model and make a judgement to detect an action event for the model (MPEP 2106).)
“responsive to the detection of the action event: identify a responsive action for the machine learning model according to the detected action event” (A person can mentally evaluate the action event and make a judgement to identify a responsive action based on that (MPEP 2106).)
Further, claim 2 recites “a machine learning model” (In step 2A, prong 2, this recites mere instruction to apply the judicial exception using a generic computer. In step 2B, mere instruction to apply the judicial exception using a generic computer is not indicative of significantly more.)
Further, claim 2 recites “cause performance of the responsive action for the machine learning model.” (In step 2A, prong 2, this recites mere application of the judicial exception (machine learning model) (MPEP 2106.05(f)). In step 2B, mere application of the judicial exception to perform an abstract idea is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 3, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 3 recites the following additional mental process:
“generate… a visualization of one of the one or more performance metrics” (A person can mentally evaluate a performance metric and make a judgement to generate a visualization of it (MPEP 2106).)
Further, claim 3 recites “wherein to provide the one or more performance metrics for the machine learning model, the time series forecasting system is configured to… display a visualization of one of the one or more performance metrics” (In step 2A, prong 2, this recites adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)). In step 2B, this recites presenting via an interface. The courts have found steps of presenting via an interface to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 4, it is dependent upon claim 1, and thereby incorporates the limitations of, and corresponding analysis applied to claim 1. Further, claim 4 recites “wherein the time series forecasting system is a time series forecasting service implemented as part of a provider network, wherein the machine learning model is created in response to a request received at the time series forecasting service to create and host the machine learning model for generating one or more time series forecasts” (In step 2A, prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 5, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “A method”. A method is one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:
“generate a new time series forecast” (A person can mentally evaluate data of a time series and make a judgement to predict later data (MPEP 2106).)
“determining… that the data is associated with a previously generated time series forecast” (A person can mentally evaluate data and make a judgement to determine that it is associated with a previously generated time series forecast (MPEP 2106).)
“generating… one or more performance metrics for the machine learning model according to a comparison of the data with the previously generated time series forecast” (A person can mentally evaluate a comparison of data and make a judgement to generate performance metrics from that comparison (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
“receiving, at a time series forecasting system, data” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
“…by the time series forecasting system…” (This recites mere application of the judicial exception (time series forecasting system) (MPEP 2106.05(f)).)
“…by a machine learning model…” (This is mere instruction to apply the judicial exception using a generic computer.)
“providing, by an interface of the time series forecasting system, the one or more performance metrics for the machine learning model.” (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional elements (iv) & (vii) recite insignificant extra-solution activities. Further, element (iv) recites steps of receiving/transmitting data over a network, which the courts have found to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Further, element (vii) recites steps of presenting via an interface, which the courts have found to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Additional element (v) recites mere application of the judicial exception to perform an abstract idea, which is not indicative of significantly more. Additional element (vi) recites mere instruction to apply the judicial exception using a generic computer, which is not indicative of significantly more. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 6, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 6 recites “further comprising receiving a request, via the interface of the time series forecasting system to enable performance monitoring of the machine learning model” (In step 2A, prong 2, this recites adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)). In step 2B, this recites transmitting/receiving data over a network, which the courts have found to be a well-understood, routine, and conventional activity (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).)
Further, claim 6 recites “wherein the determining, the generating, and the providing are enabled for performance by the time series forecasting system responsive to the request to enable performance monitoring” (In step 2A, prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 7, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 7 recites “wherein the data is received to generate the new time series forecasting using a second machine learning model different than the machine learning model” (In step 2A, prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). In step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 8, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 8 recites the following additional mental processes:
“detecting… an action event for the machine learning model based, at least in part, on the one or more performance metrics” (A person can mentally evaluate the performance metrics of a machine learning model and make a judgement to detect an action event for the model (MPEP 2106).)
“responsive to detecting the action event: identifying… a responsive action for the machine learning model according to the detected action event” (A person can mentally evaluate the action event and make a judgement to identify a responsive action based on that (MPEP 2106).)
Further, claim 8 recites “causing, by the time series forecasting system, performance of the responsive action for the machine learning model.” (In step 2A, prong 2, this recites mere application of the judicial exception (machine learning model) (MPEP 2106.05(f)). In step 2B, mere application of the judicial exception to perform an abstract idea is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 9, it is dependent upon claim 8, and thereby incorporates the limitations of, and corresponding analysis applied to claim 8. Further, claim 9 recites “wherein the responsive action is retraining the machine learning model based, at least in part, on the received data” (In step 2A, prong 2, merely retraining a machine learning model recites mere application of the judicial exception (machine learning model) (MPEP 2106.05(f)). In step 2B, mere application of the judicial exception to perform an abstract idea is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 10, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 10 recites “providing, via the interface of the time series forecasting system, a recommended action to perform for the machine learning model.” (In step 2A, prong 2, this recites adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)). In step 2B, this recites presenting via an interface. The courts have found steps of presenting via an interface to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 11, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 11 recites “providing, via the interface of the time series forecasting system, a root cause explanation for the one or more performance metrics” (In step 2A, prong 2, this recites adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)). In step 2B, this recites presenting via an interface. The courts have found steps of presenting via an interface to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 12, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 12 recites the following additional mental process:
“wherein one of the one or more performance metrics was defined analyzing performance of the machine learning model in a request received at the time series forecasting system.” (A person can mentally evaluate the performance of the machine learning model and make a judgement to define the performance metrics from it (MPEP 2106).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 13, it is dependent upon claim 5, and thereby incorporates the limitations of, and corresponding analysis applied to claim 5. Further, claim 13 recites the following additional mental process:
“generating… a visualization of one of the one or more performance metrics” (A person can mentally evaluate a performance metric and make a judgement to generate a visualization of it (MPEP 2106).)
Further, claim 13 recites “wherein providing the one or more performance metrics for the machine learning model comprises… displaying a visualization of one of the one or more performance metrics” (In step 2A, prong 2, this recites adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)). In step 2B, this recites presenting via an interface. The courts have found steps of presenting via an interface to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 14, in Step 1 of the 101-analysis set forth in MPEP 2106, the claim recites “One or more non-transitory, computer-readable storage media, storing program instructions”. Non-transitory, computer-readable storage media is within one of the four statutory categories of invention.
In Step 2a Prong 1 of the 101-analysis set forth in the MPEP 2106, the examiner has determined that the following limitations recite a process that, under the broadest reasonable interpretation, covers a mental process but for recitation of generic computer components:
“generate a new time series forecast” (A person can mentally evaluate data of a time series and make a judgement to predict later data (MPEP 2106).)
“automatically identifying a previously generated time series forecast… that is associated with the data” (A person can mentally evaluate data and make a judgement to identify a previously generated time series forecast associated with it (MPEP 2106).)
“generating one or more performance metrics for the machine learning model according to a comparison of the data with the previously generated time series forecast” (A person can mentally evaluate a comparison of data and make a judgement to generate performance metrics from that comparison (MPEP 2106).)
If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then they fall within the mental process grouping of abstract ideas. Accordingly, the claim “recites” an abstract idea.
In Step 2a Prong 2 of the 101-analysis set forth in MPEP 2106, the examiner has determined that the following additional elements do not integrate this judicial exception into a practical application:
“One or more non-transitory, computer-readable storage media, storing program instructions that when executed on or across one or more computing devices cause the one or more computing devices to implement a time series forecasting system” (Uses a computer as a tool to perform an abstract idea (MPEP 2106.05(f)).)
“receiving data” (Adding insignificant extra-solution activity (mere data gathering) to the judicial exception (MPEP 2106.05(g)).)
“…by a machine learning model…” (This is mere instruction to apply the judicial exception using a generic computer.)
“providing, via an interface of the time series forecasting system, the one or more performance metrics for the machine learning model” (Adding insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g)).)
Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is “directed” to an abstract idea.
In Step 2b of the 101-analysis set forth in the 2019 PEG, the examiner has determined that the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, additional element (iv) recites use of a computer as a tool to perform the abstract idea, and element (vi) recites mere instruction to apply the judicial exception using a generic computer; neither is indicative of significantly more. Additional elements (v) & (vii) recite insignificant extra-solution activities. Further, element (v) recites steps of receiving/transmitting data over a network, which the courts have found to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Further, element (vii) recites steps of presenting via an interface, which the courts have found to recite a well-understood, routine, and conventional activity, which is not indicative of significantly more (Presenting offers and gathering statistics, OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93). Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 15, it is dependent upon claim 14, and thereby incorporates the limitations of, and corresponding analysis applied to claim 14. Further, claim 15 recites the following additional mental processes:
“detecting an action event for the machine learning model based, at least in part, on the one or more performance metrics” (A person can mentally evaluate the performance metrics of a machine learning model and make a judgement to detect an action event for the model (MPEP 2106).)
“responsive to detecting the action event: identifying a responsive action for the machine learning model according to the detected action event” (A person can mentally evaluate the action event and make a judgement to identify a responsive action based on that (MPEP 2106).)
Further, claim 15 recites “causing performance of the responsive action for the machine learning model.” (In step 2A, prong 2, this recites mere application of the judicial exception (machine learning model) (MPEP 2106.05(f).) In step 2B, mere application of the judicial exception to perform an abstract idea is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claim 16, it is dependent upon claim 15, and thereby incorporates the limitations of, and corresponding analysis applied to claim 15. Further, claim 16 recites “wherein the responsive action is sending an alert with respect to performance of the machine learning model” (In step 2A, prong 2, this recites insignificant extra-solution activity (mere data output) to the judicial exception (MPEP 2106.05(g).) In step 2B, this recites transmitting/receiving data over a network, which the courts have found to be a well-understood, routine, and conventional activity (Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362).)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Regarding claims 17-19, they are dependent upon claim 14, and thereby incorporate the limitations of, and corresponding analysis applied to claim 14. Further, claims 17-19 recite similar additional limitations as claims 10-11, & 13, respectively, and are rejected under the same rationale.
Regarding claim 20, it is dependent upon claim 14, and thereby incorporates the limitations of, and corresponding analysis applied to claim 14. Further, claim 20 recites “wherein the time series forecasting system is implemented as part of an image or container for execution on a virtual compute system that is implemented as part of a provider network” (In step 2A, prong 2, this recites generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h).) In step 2B, generally linking the use of the judicial exception to a particular technological environment or field of use is not indicative of significantly more.)
Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 5, 7, 11-14, & 18-19 are rejected under 35 U.S.C. 102(a)(1) as being clearly anticipated by Dasgupta, S. et al., WIPO PCT Pub. No. WO 2021/021271 A9 (hereafter, DASGUPTA).
Regarding claim 5, DASGUPTA teaches “receiving, at a time series forecasting system, data to generate a new time series forecast” [Figure 6]
Steps 605-610 illustrate receiving time-series data and providing it to a forecasting model, as further detailed in ([Page 14, Lines 7-15] “Figure 6 depicts a method 600 of operating diagnostics framework system for large scale hierarchical time-series forecasting models. (The examiner will cite Figure 6 and method 600 throughout this rejection.)
At 605, one or more hierarchical time-series are provided. The hierarchical time-series data structure in this context may include structures similar to those depicted in Figure 1A or 1B and the related description above.
At 610, node data from the plurality of hierarchical time-series are provided to a forecasting model. In some embodiments, these forecasting models may be one or more of stream forecasting model 204, aggregation forecasting model 214, and top-level forecasting model 224.”)
Further, DASGUPTA teaches “determining, by the time series forecasting system, that the data is associated with a previously generated time series forecast by a machine learning model” ([Page 14, Lines 12-15] “At 610, node data from the plurality of hierarchical time-series are provided to a forecasting model. In some embodiments, these forecasting models may be one or more of stream forecasting model 204, aggregation forecasting model 214, and top-level forecasting model 224. (This citation shows the various models to which the data was sent, and which are thus used.)
At 615, a plurality of forecasting data corresponding to each node provided from the plurality of hierarchical time-series is concurrently calculated.”) Further in the reference at ([Page 16, Lines 14-15] “As used herein, the term “concurrently” or “concurrent” refers to actions taken in parallel that do not need to necessarily begin or end at the same time.”) As such, for steps 615-635 of method 600 to work, including generating performance metrics using all of the models, when the usage of said models does not have to “begin or end at the same time,” a determination that “the data is associated with a previously generated time-series forecast by a machine learning model” must be made.
Further, DASGUPTA teaches “generating, by the time series forecasting system, one or more performance metrics for the machine learning model according to a comparison of the data with the previously generated time series forecast” ([Figure 6, step 620, & Page 14, Lines 18-19] “At 620, performance metrics of the forecasting model generating the forecasting data is concurrently (meaning they do not have to begin or end at the same time, and thus, some could be “previously generated”) calculated”)
And further, we can see that the metrics are generated according to a comparison of the models at ([Page 10, Lines 8-18] “In embodiments, stream metrics 242 for particular stream forecast data 206, such as based upon stream node data 120 of Figure 1R are normalized so that they may be compared to stream metrics 242 of stream forecast data 206n based upon data from other stream nodes, such as stream node data 121 of hierarchical time-series 100, or stream node data 120, from single or multiple hierarchical time-series 100 of Figure 1B. By normalizing multiple stream metrics 242 that are derived from different stream forecast data 206, the stream forecasting model 204 may be evaluated, and as appropriate, modified.
Factor metrics 244 are computed to disambiguate various effects contributing to the north-star metric 248 discussed below. They are applied to diagnose model performance for root causes of poor model performance of the stream forecasting model 204, the aggregation forecasting model 214, and/or the top-level aggregation model 224. (As cited above, these are the models used in method 600.)”)
Further, DASGUPTA teaches “providing, by an interface of the time series forecasting system, the one or more performance metrics for the machine learning model” ([Page 14, Lines 24-25] “Finally, at 670, data, aggregation, models, metrics, and results may be displayed to a user via diagnostic dashboard.”)
Regarding claim 7, DASGUPTA teaches the limitations of claim 5. Further, DASGUPTA teaches “wherein the data is received to generate the new time series forecasting using a second machine learning model different than the machine learning model” [Figure 6]
Regardless of the model(s) used in step 610, step 625 generates a new model which is used in steps thereafter to generate forecasts.
Regarding claim 11, DASGUPTA teaches the limitations of claim 5. Further, DASGUPTA teaches “providing, via the interface of the time series forecasting system, a root cause explanation for the one or more performance metrics” ([Page 10, Lines 15-21] “Factor metrics 244 are computed to disambiguate various effects contributing to the north-star metric 248 discussed below. They are applied to diagnose model performance for root causes of poor model performance of the stream forecasting model 204, the aggregation forecasting model 214, and/or the top-level aggregation model 224. Factor metrics 244 determine root causes from factors such as error in bias estimation (including raw bias error and magnitude of bias error), error in variance estimation, and sharpness of the predicted distribution.”)
And further, ([Page 14, lines 3-6] “In yet further embodiments, diagnostics framework 200 includes a dashboard 254 (a user interface), upon which the hierarchical time-series and/or their nodes, models, forecast data, or metrics may be displayed to a user, enabling the user to modify, perform operations upon, or combine any of these.”)
Regarding claim 12, DASGUPTA teaches the limitations of claim 5. Further, DASGUPTA teaches “wherein one of the one or more performance metrics was defined analyzing performance of the machine learning model in a request received at the time series forecasting system” [Figure 6]
Step 620 shows the calculation of the performance metrics, as further defined in ([Figure 6, step 620, & Page 14, Lines 18-19] “At 620, performance metrics of the forecasting model generating the forecasting data is concurrently calculated”)
And further, we can see that the metrics are generated according to a comparison of the models at ([Page 10, Lines 8-18] “In embodiments, stream metrics 242 for particular stream forecast data 206, such as based upon stream node data 120 of Figure 1R are normalized so that they may be compared to stream metrics 242 of stream forecast data 206n based upon data from other stream nodes, such as stream node data 121 of hierarchical time-series 100, or stream node data 120, from single or multiple hierarchical time-series 100 of Figure 1B. By normalizing multiple stream metrics 242 that are derived from different stream forecast data 206, the stream forecasting model 204 may be evaluated, and as appropriate, modified.
Factor metrics 244 are computed to disambiguate various effects contributing to the north-star metric 248 discussed below. They are applied to diagnose model performance for root causes of poor model performance of the stream forecasting model 204, the aggregation forecasting model 214, and/or the top-level aggregation model 224.”)
Further, the fact that a “user” sends input into the service to cause these methods to happen signifies a “request” to cause these features, including the generation and/or defining of performance metrics.
Regarding claim 13, DASGUPTA teaches the limitations of claim 5. Further, DASGUPTA teaches “wherein providing the one or more performance metrics for the machine learning model comprises generating and displaying a visualization of one of the one or more performance metrics.” ([Page 14, Lines 3-6] “In yet further embodiments, diagnostics framework 200 includes a dashboard 254 (a user interface/display), upon which the hierarchical time-series and/or their nodes, models, forecast data, or metrics may be displayed to a user, enabling the user to modify, perform operations upon, or combine any of these.”)
And further, an example visualization can be seen in Figure 5.
Regarding claim 14, DASGUPTA teaches “One or more non-transitory, computer-readable storage media, storing program instructions that when executed on or across one or more computing devices cause the one or more computing devices to implement a time series forecasting system” ([Pages 4-5, lines 26-30 & 1-5, respectively] “Other embodiments provide a non-transitory computer-readable medium comprising instructions that when executed by a processor of a processing system, cause the processing system to perform a method of evaluating performance of a system of models of a hierarchical time-series, (from this point on, the reference details an implementation of a time-series forecasting system) comprising: providing a plurality of hierarchical time-series, each of the plurality of hierarchical time-series comprising node data; concurrently providing node data from the plurality of hierarchical time-series to a forecasting model; using the forecasting model, concurrently calculating a plurality of forecasting data corresponding to each one of the node data of the plurality of hierarchical time-series; concurrently calculating a plurality of performance metrics of the forecasting model using the plurality of forecasting data; generating an updated forecasting model by modifying the forecasting model based upon the plurality of performance metrics; concurrently calculating a plurality of updated forecasting data corresponding to each one of the node data using the updated forecasting model; and providing the updated forecasting data to a user.”)
Further, DASGUPTA teaches “receiving data to generate a new time series forecast” [Figure 6]
Steps 605-610 illustrate receiving time-series data and providing it to a forecasting model, as further detailed in ([Page 14, Lines 7-15] “Figure 6 depicts a method 600 of operating diagnostics framework system for large scale hierarchical time-series forecasting models. (The examiner will cite Figure 6 and method 600 throughout this rejection.)
At 605, one or more hierarchical time-series are provided. The hierarchical time-series data structure in this context may include structures similar to those depicted in Figure 1A or 1B and the related description above.
At 610, node data from the plurality of hierarchical time-series are provided to a forecasting model. In some embodiments, these forecasting models may be one or more of stream forecasting model 204, aggregation forecasting model 214, and top-level forecasting model 224.”)
Further, DASGUPTA teaches “automatically identifying a previously generated time series forecast by a machine learning model that is associated with the data” ([Page 14, Lines 12-15] “At 610, node data from the plurality of hierarchical time-series are provided to a forecasting model. In some embodiments, these forecasting models may be one or more of stream forecasting model 204, aggregation forecasting model 214, and top-level forecasting model 224. (This citation shows the various models to which the data was sent, and which are thus used.)
At 615, a plurality of forecasting data corresponding to each node provided from the plurality of hierarchical time-series is concurrently calculated.”) Further in the reference at ([Page 16, Lines 14-15] “As used herein, the term “concurrently” or “concurrent” refers to actions taken in parallel that do not need to necessarily begin or end at the same time.”) As such, for steps 615-635 of method 600 to work, including generating performance metrics using all of the models, when the usage of said models does not have to “begin or end at the same time,” a determination that “the data is associated with a previously generated time-series forecast by a machine learning model” must be made.
Further, DASGUPTA teaches “generating one or more performance metrics for the machine learning model according to a comparison of the data with the previously generated time series forecast” ([Figure 6, step 620, & Page 14, Lines 18-19] “At 620, performance metrics of the forecasting model generating the forecasting data is concurrently (meaning they do not have to begin or end at the same time, and thus, some could be “previously generated”) calculated”)
And further, we can see that the metrics are generated according to a comparison of the models at ([Page 10, Lines 8-18] “In embodiments, stream metrics 242 for particular stream forecast data 206, such as based upon stream node data 120 of Figure 1R are normalized so that they may be compared to stream metrics 242 of stream forecast data 206n based upon data from other stream nodes, such as stream node data 121 of hierarchical time-series 100, or stream node data 120, from single or multiple hierarchical time-series 100 of Figure 1B. By normalizing multiple stream metrics 242 that are derived from different stream forecast data 206, the stream forecasting model 204 may be evaluated, and as appropriate, modified.
Factor metrics 244 are computed to disambiguate various effects contributing to the north-star metric 248 discussed below. They are applied to diagnose model performance for root causes of poor model performance of the stream forecasting model 204, the aggregation forecasting model 214, and/or the top-level aggregation model 224. (As cited above, these are the models used in method 600.)”)
Further, DASGUPTA teaches “providing, via an interface of the time series forecasting system, the one or more performance metrics for the machine learning model” ([Page 14, Lines 24-25] “Finally, at 670, data, aggregation, models, metrics, and results may be displayed to a user via diagnostic dashboard.”)
Regarding claim 18, DASGUPTA teaches the limitations of claim 14. Further, claim 18 recites similar additional limitations as claim 11, and is rejected under the same rationale.
Regarding claim 19, DASGUPTA teaches the limitations of claim 14. Further, DASGUPTA teaches “wherein, in providing the one or more performance metrics for the machine learning model, the program instructions cause the one or more computing devices to implement generating and displaying a visualization of one of the one or more performance metrics” ([Page 14, Lines 3-6] “In yet further embodiments, diagnostics framework 200 includes a dashboard 254 (a user interface/display), upon which the hierarchical time-series and/or their nodes, models, forecast data, or metrics may be displayed to a user, enabling the user to modify, perform operations upon, or combine any of these.”)
And further, an example visualization can be seen in Figure 5.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6, 8-9, 15-16, & 20 are rejected under 35 U.S.C. 103 as being unpatentable over DASGUPTA, and further in view of Bugdayci, et al. Patent PGPUB No. US 2022/0237102 A1 (hereafter, BUGDAYCI).
Regarding claim 1, DASGUPTA teaches “A system, comprising: at least one processor; and a memory, storing program instructions that when executed by the at least one processor, cause the at least one processor to implement a time series forecasting system” ([Page 3, Lines 6-12] “Other embodiments provide a system for evaluating performance of a system of models of a hierarchical time-series, comprising a memory comprising computer-readable instructions; a plurality of hierarchical time-series, each of the plurality of hierarchical time-series comprising a node, each node comprising node data; a forecasting model; a plurality of performance metrics; a processor configured to calculate concurrently a plurality of forecasting data using node data corresponding to a node, each one of the plurality of forecasting data corresponding to a respective node;”)
Further, DASGUPTA teaches “receive data to generate a new time series forecast” [Figure 6]
Steps 605-610 illustrate receiving time-series data and providing it to a forecasting model, as further detailed in ([Page 14, Lines 7-15] “Figure 6 depicts a method 600 of operating diagnostics framework system for large scale hierarchical time-series forecasting models. (The examiner will cite Figure 6 and method 600 throughout this rejection.)
At 605, one or more hierarchical time-series are provided. The hierarchical time-series data structure in this context may include structures similar to those depicted in Figure 1A or 1B and the related description above.
At 610, node data from the plurality of hierarchical time-series are provided to a forecasting model. In some embodiments, these forecasting models may be one or more of stream forecasting model 204, aggregation forecasting model 214, and top-level forecasting model 224.”)
Further, DASGUPTA teaches “determine that the data is associated with a previously generated time series forecast by a machine learning model” ([Page 14, Lines 12-15] “At 610, node data from the plurality of hierarchical time-series are provided to a forecasting model. In some embodiments, these forecasting models may be one or more of stream forecasting model 204, aggregation forecasting model 214, and top-level forecasting model 224. (This citation shows the various models to which the data was sent, and which are thus used.)
At 615, a plurality of forecasting data corresponding to each node provided from the plurality of hierarchical time-series is concurrently calculated.”) Further in the reference at ([Page 16, Lines 14-15] “As used herein, the term “concurrently” or “concurrent” refers to actions taken in parallel that do not need to necessarily begin or end at the same time.”) As such, for steps 615-635 of method 600 to work, including generating performance metrics using all of the models, when the usage of said models does not have to “begin or end at the same time,” a determination that “the data is associated with a previously generated time-series forecast by a machine learning model” must be made.
Further, DASGUPTA teaches “…generate one or more performance metrics for the machine learning model according to a comparison of the data with the previously generated time series forecast” ([Figure 6, step 620, & Page 14, Lines 18-19] “At 620, performance metrics of the forecasting model generating the forecasting data is concurrently (meaning they do not have to begin or end at the same time, and thus, some could be “previously generated”) calculated”)
And further, we can see that the metrics are generated according to a comparison of the models at ([Page 10, Lines 8-18] “In embodiments, stream metrics 242 for particular stream forecast data 206, such as based upon stream node data 120 of Figure 1R are normalized so that they may be compared to stream metrics 242 of stream forecast data 206n based upon data from other stream nodes, such as stream node data 121 of hierarchical time-series 100, or stream node data 120, from single or multiple hierarchical time-series 100 of Figure 1B. By normalizing multiple stream metrics 242 that are derived from different stream forecast data 206, the stream forecasting model 204 may be evaluated, and as appropriate, modified.
Factor metrics 244 are computed to disambiguate various effects contributing to the north-star metric 248 discussed below. They are applied to diagnose model performance for root causes of poor model performance of the stream forecasting model 204, the aggregation forecasting model 214, and/or the top-level aggregation model 224. (As cited above, these are the models used in method 600.)”)
Further, DASGUPTA teaches “provide, via an interface of the time series forecasting system, the one or more performance metrics for the machine learning model” ([Page 14, Lines 24-25] “Finally, at 670, data, aggregation, models, metrics, and results may be displayed to a user via diagnostic dashboard.”)
Further, DASGUPTA fails to explicitly teach “determine that model monitoring is enabled for the machine learning model; responsive to the determination that model monitoring is enabled for the machine learning model:…”
However, analogous art of another time-series anomaly/metric detection forecasting service, BUGDAYCI, does teach this ([0105] “Software instructions (also referred to as instructions) are capable of causing (also referred to as operable to cause and configurable to cause) a set of processors to perform operations when the instructions are executed by the set of processors. The phrase "capable of causing" (and synonyms mentioned above) includes various scenarios (or combinations thereof), such as instructions that are always executed versus instructions that may be executed. For example, instructions may be executed: 1) only in certain situations when the larger program is executed (e.g., a condition is fulfilled in the larger program; an event occurs such as a software or hardware interrupt, user input (e.g., a keystroke, a mouse-click, a voice command); a message is published, etc.); or 2) when the instructions are called by another program or part thereof (whether or not executed in the same or a different process, thread, lightweight thread, etc.). These scenarios may or may not require that a larger program, of which the instructions are a part, be currently configured to use those instructions (e.g., may or may not require that a user enables a feature, the feature or instructions be unlocked or enabled, the larger program is configured using data and the program's inherent functionality, etc.). As shown by these exemplary scenarios, "capable of causing" (and synonyms mentioned above) does not require "causing" but the mere capability to cause. While the term "instructions" may be used to refer to the instructions that when executed cause the performance of the operations described herein, the term may or may not also refer to other instructions that a program may include. 
Thus, instructions, code, program, and software are capable of causing operations when executed, whether the operations are always performed or sometimes performed (e.g., in the scenarios described previously). The phrase "the instructions when executed" refers to at least the instructions that when executed cause the performance of the operations described herein but may or may not refer to the execution of the other instructions.”) This citation illustrates that various features of the method and system can be enabled/disabled; thus, when combined with DASGUPTA, the result is the invention as claimed, wherein various features (including model monitoring) can be enabled/disabled, such that those processes run only when the feature is enabled.
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of DASGUPTA with the teachings of BUGDAYCI because both references teach similar methods and systems of measuring the performance of time-series forecasting models via metrics, etc.
One of ordinary skill in the art would have been motivated to do so because enabling/disabling the monitoring of specific models allows greater user control, as monitoring can be disabled for some models, freeing resources to monitor others instead.
Regarding claim 2, DASGUPTA in view of BUGDAYCI teaches the limitations of claim 1. Further, BUGDAYCI teaches “wherein the time series forecasting system is further configured to: detect an action event for the machine learning model based, at least in part, on the one or more performance metrics” ([Abstract] “Systems and methods are described for applying a plurality of data points of a time series data set representing values of a metric measuring performance of a cloud computing service to a machine learning model to predict a forecast of a most likely value of the metric at a selected future time. The method includes determining whether the plurality of data points of the time series data set are anomalies according to the machine learning model and the forecast and generating a collective anomaly from the anomalies when the plurality of data points is determined to be anomalies. The method further includes determining whether the collective anomaly does not meet one or more cloud computing service level objective (SLO) threshold requirements (determining the anomaly correlates to the detection of an action event, based at least in part on the performance metrics) and sending an alert (a responsive action) when the collective anomaly does not meet one or more cloud computing SLO threshold requirements.”)
Further, BUGDAYCI teaches “responsive to the detection of the action event: identify a responsive action for the machine learning model according to the detected action event; and cause performance of the responsive action for the machine learning model” ([Abstract] “Systems and methods are described for applying a plurality of data points of a time series data set representing values of a metric measuring performance of a cloud computing service to a machine learning model to predict a forecast of a most likely value of the metric at a selected future time. The method includes determining whether the plurality of data points of the time series data set are anomalies according to the machine learning model and the forecast and generating a collective anomaly from the anomalies when the plurality of data points is determined to be anomalies. The method further includes determining whether the collective anomaly does not meet one or more cloud computing service level objective (SLO) threshold requirements and sending an alert when the collective anomaly does not meet one or more cloud computing SLO threshold requirements.” (sending the alert constitutes identifying and causing performance of an appropriate responsive action to take for the machine learning model))
Regarding claim 3, DASGUPTA in view of BUGDAYCI teaches the limitations of claim 1. Further, DASGUPTA teaches “wherein to provide the one or more performance metrics for the machine learning model, the time series forecasting system is configured to generate and display a visualization of one of the one or more performance metrics” ([Page 14, Lines 3-6] “In yet further embodiments, diagnostics framework 200 includes a dashboard 254 (a user interface/display), upon which the hierarchical time-series and/or their nodes, models, forecast data, or metrics may be displayed to a user, enabling the user to modify, perform operations upon, or combine any of these.”)
Further, an example visualization can be seen in Figure 5 of DASGUPTA.
Regarding claim 4, DASGUPTA in view of BUGDAYCI teaches the limitations of claim 1. Further, DASGUPTA teaches “wherein the time series forecasting system is a time series forecasting service implemented as part of a provider network” ([Page 17, Lines 7-9] “One of skill in the art will appreciate that one or more components coupled by the bus may be alternatively coupled via a network (e.g., for full or partial implementations of a processing system in a distributed or cloud environment)”)
Further, DASGUPTA teaches “wherein the machine learning model is created in response to a request received at the time series forecasting service to create and host the machine learning model for generating one or more time series forecasts” ([Figure 6])
Step 625 shows the generation of a machine learning model created and hosted at the time series forecasting service to generate time series forecasts. Further, the fact that a “user” sends input into the service to initiate these methods signifies a “request” to cause the new model to be generated.
Regarding claim 6, DASGUPTA teaches the limitations of claim 5. Further, DASGUPTA fails to explicitly teach “receiving a request, via the interface of the time series forecasting system to enable performance monitoring of the machine learning model, wherein the determining, the generating, and the providing are enabled for performance by the time series forecasting system responsive to the request to enable performance monitoring.”
However, analogous art, BUGDAYCI, does teach this ([0105] “Software instructions (also referred to as instructions) are capable of causing (also referred to as operable to cause and configurable to cause) a set of processors to perform operations when the instructions are executed by the set of processors. The phrase "capable of causing" (and synonyms mentioned above) includes various scenarios (or combinations thereof), such as instructions that are always executed versus instructions that may be executed. For example, instructions may be executed: 1) only in certain situations when the larger program is executed (e.g., a condition is fulfilled in the larger program; an event occurs such as a software or hardware interrupt, user input (e.g., a keystroke, a mouse-click, a voice command); a message is published, etc.); or 2) when the instructions are called by another program or part thereof (whether or not executed in the same or a different process, thread, lightweight thread, etc.). These scenarios may or may not require that a larger program, of which the instructions are a part, be currently configured to use those instructions (e.g., may or may not require that a user enables a feature (a user enabling the feature is a request to enable the feature), the feature or instructions be unlocked or enabled, the larger program is configured using data and the program's inherent functionality, etc.). As shown by these exemplary scenarios, "capable of causing" (and synonyms mentioned above) does not require "causing" but the mere capability to cause. While the term "instructions" may be used to refer to the instructions that when executed cause the performance of the operations described herein, the term may or may not also refer to other instructions that a program may include. 
Thus, instructions, code, program, and software are capable of causing operations when executed, whether the operations are always performed or sometimes performed (e.g., in the scenarios described previously). The phrase "the instructions when executed" refers to at least the instructions that when executed cause the performance of the operations described herein but may or may not refer to the execution of the other instructions.”) This citation shows that features and methods of the system can be enabled or disabled, and when combined with DASGUPTA, would result in the ability to enable or disable performance monitoring, as claimed; thus, the instructions to determine, generate, and provide would execute only if enabled.
Regarding claim 8, DASGUPTA teaches the limitations of claim 5. DASGUPTA fails to explicitly teach “detecting, by the time series forecasting system, an action event for the machine learning model based, at least in part, on the one or more performance metrics; responsive to detecting the action event: identifying, by the time series forecasting system, a responsive action for the machine learning model according to the detected action event; and causing, by the time series forecasting system, performance of the responsive action for the machine learning model.”
However, analogous art, BUGDAYCI teaches “detecting, by the time series forecasting system, an action event for the machine learning model based, at least in part, on the one or more performance metrics” ([Abstract] “Systems and methods are described for applying a plurality of data points of a time series data set representing values of a metric measuring performance of a cloud computing service to a machine learning model to predict a forecast of a most likely value of the metric at a selected future time. The method includes determining whether the plurality of data points of the time series data set are anomalies according to the machine learning model and the forecast and generating a collective anomaly from the anomalies when the plurality of data points is determined to be anomalies. The method further includes determining whether the collective anomaly does not meet one or more cloud computing service level objective (SLO) threshold requirements (determining the anomaly constitutes detecting an action event based at least in part on the performance metrics) and sending an alert (a responsive action) when the collective anomaly does not meet one or more cloud computing SLO threshold requirements.”)
Further, BUGDAYCI teaches “responsive to detecting the action event: identifying, by the time series forecasting system, a responsive action for the machine learning model according to the detected action event; and causing, by the time series forecasting system, performance of the responsive action for the machine learning model” ([Abstract] “Systems and methods are described for applying a plurality of data points of a time series data set representing values of a metric measuring performance of a cloud computing service to a machine learning model to predict a forecast of a most likely value of the metric at a selected future time. The method includes determining whether the plurality of data points of the time series data set are anomalies according to the machine learning model and the forecast and generating a collective anomaly from the anomalies when the plurality of data points is determined to be anomalies. The method further includes determining whether the collective anomaly does not meet one or more cloud computing service level objective (SLO) threshold requirements and sending an alert when the collective anomaly does not meet one or more cloud computing SLO threshold requirements.” (sending an alert constitutes identifying and causing performance of an appropriate responsive action to take for the machine learning model))
Regarding claim 9, DASGUPTA in view of BUGDAYCI teaches the limitations of claim 8. Further, BUGDAYCI teaches “wherein the responsive action is retraining the machine learning model based, at least in part, on the received data” ([0099, Sentence 2, onward] “At block 1122, the system administrator receives the alert. If the system administrator at block 1124 decides that this alert was not legitimate/valid (e.g., the alert is a false alarm) (an action event) this means the model 612 was not trained on the best possible scenario, and the machine learning system of log error and metrics quality analyzer 504 needs to receive this information to retrain the model in order to make better decisions in the future (responsive action to retrain). At this point, model 612 ignores the cause of the failure (e.g., bad data points that resulted in this decision), retrains confidence ranges for discovered point anomalies (e.g., for the current organization) by continuing processing with the next data point at block 1110 via connector 11C.”)
Regarding claim 15, DASGUPTA teaches the limitations of claim 14. DASGUPTA fails to explicitly teach “detecting an action event for the machine learning model based, at least in part, on the one or more performance metrics; responsive to detecting the action event: identifying a responsive action for the machine learning model according to the detected action event; and causing performance of the responsive action for the machine learning model.”
However, analogous art, BUGDAYCI teaches “detecting an action event for the machine learning model based, at least in part, on the one or more performance metrics” ([Abstract] “Systems and methods are described for applying a plurality of data points of a time series data set representing values of a metric measuring performance of a cloud computing service to a machine learning model to predict a forecast of a most likely value of the metric at a selected future time. The method includes determining whether the plurality of data points of the time series data set are anomalies according to the machine learning model and the forecast and generating a collective anomaly from the anomalies when the plurality of data points is determined to be anomalies. The method further includes determining whether the collective anomaly does not meet one or more cloud computing service level objective (SLO) threshold requirements (determining an anomaly constitutes detecting an action event based at least in part on the performance metrics) and sending an alert (a responsive action) when the collective anomaly does not meet one or more cloud computing SLO threshold requirements.”)
Further, BUGDAYCI teaches “responsive to detecting the action event: identifying a responsive action for the machine learning model according to the detected action event; and causing performance of the responsive action for the machine learning model.” ([Abstract] “Systems and methods are described for applying a plurality of data points of a time series data set representing values of a metric measuring performance of a cloud computing service to a machine learning model to predict a forecast of a most likely value of the metric at a selected future time. The method includes determining whether the plurality of data points of the time series data set are anomalies according to the machine learning model and the forecast and generating a collective anomaly from the anomalies when the plurality of data points is determined to be anomalies. The method further includes determining whether the collective anomaly does not meet one or more cloud computing service level objective (SLO) threshold requirements and sending an alert when the collective anomaly does not meet one or more cloud computing SLO threshold requirements.” (sending an alert constitutes identifying and causing performance of an appropriate responsive action to take for the machine learning model))
Regarding claim 16, DASGUPTA in view of BUGDAYCI teaches the limitations of claim 15. Further, BUGDAYCI teaches “wherein the responsive action is sending an alert with respect to performance of the machine learning model” ([Abstract] “Systems and methods are described for applying a plurality of data points of a time series data set representing values of a metric measuring performance of a cloud computing service to a machine learning model to predict a forecast of a most likely value of the metric at a selected future time. The method includes determining whether the plurality of data points of the time series data set are anomalies according to the machine learning model and the forecast and generating a collective anomaly from the anomalies when the plurality of data points is determined to be anomalies. The method further includes determining whether the collective anomaly does not meet one or more cloud computing service level objective (SLO) threshold requirements and sending an alert when the collective anomaly does not meet one or more cloud computing SLO threshold requirements.”)
Regarding claim 20, DASGUPTA teaches the limitations of claim 14. Further, DASGUPTA teaches “wherein the time series forecasting system is implemented… as part of a provider network” ([Page 17, Lines 7-9] “One of skill in the art will appreciate that one or more components coupled by the bus may be alternatively coupled via a network (e.g., for full or partial implementations of a processing system in a distributed or cloud environment)”)
DASGUPTA fails to explicitly teach “wherein the time series forecasting system is implemented as part of an image or container for execution on a virtual compute system”
However, analogous art, BUGDAYCI does teach this ([0109] “During operation, an instance of the software 1228 (illustrated as instance 1206 and referred to as a software instance; and in the more specific case of an application, as an application instance) is executed. In electronic devices that use compute virtualization, the set of one or more processor(s) 1222 typically execute software to instantiate a virtualization layer 1208 and one or more software container(s) 1204A-1204R (e.g., with operating system-level virtualization, the virtualization layer 1208 may represent a container engine (such as Docker Engine by Docker, Inc. or rkt in Container Linux by Red Hat, Inc.) running on top of (or integrated into) an operating system, and it allows for the creation of multiple software containers 1204A-1204R (representing separate user space instances and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; with full virtualization, the virtualization layer 1208 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and the software containers 1204A-1204R each represent a tightly isolated form of a software container called a virtual machine that is run by the hypervisor and may include a guest operating system; with para-virtualization, an operating system and/or application running with a virtual machine may be aware of the presence of virtualization for optimization purposes). Again, in electronic devices where compute virtualization is used, during operation, an instance of the software 1228 is executed within the software container 1204A on the virtualization layer 1208. In electronic devices where compute virtualization is not used, the instance 1206 on top of a host operating system is executed on the "bare metal" electronic device 1200.
The instantiation of the instance 1206, as well as the virtualization layer 1208 and software containers 1204A-1204R if implemented, are collectively referred to as software instance(s) 1202.”)
Claims 10 & 17 are rejected under 35 U.S.C. 103 as being unpatentable over DASGUPTA, as applied to claims above, and further in view of Deka, P. et al. “Adversarial Impact on Anomaly Detection in Cloud Datacenters.” Available at https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8952139 on January 9, 2020 (hereafter, DEKA).
Regarding claim 10, DASGUPTA teaches the limitations of claim 5. Further, DASGUPTA teaches “via the interface of the time series forecasting system” ([Page 14, Lines 3-6] “In yet further embodiments, diagnostics framework 200 includes a dashboard 254 (a user interface/display), upon which the hierarchical time-series and/or their nodes, models, forecast data, or metrics may be displayed to a user, enabling the user to modify, perform operations upon, or combine any of these.”)
DASGUPTA fails to explicitly teach “providing… a recommended action to perform for the machine learning model.” However, analogous art of a method for detecting anomalies for cloud datacenters, DEKA, does teach this ([III. Learning Framework, paragraph 2] “In real-time, we collect different metric streams (e.g., latency, CPU, memory, disk I/O) from the cloud datacenters for detecting anomalies at scale and control them either taking an autonomous decision or recommend an intelligent action. The anomaly detection system (ADS) learns based on the training data with and without poisoning attacks and predict the test data as anomalous or not. If any anomalous event or instance is found, it generates an alarm with an associated recommended action and immediately sends for postmortem.”) This reference detects anomalies and generates recommended actions to send to the user. While it does not specify “how” it sends the recommended actions to the user (e.g. “via the interface”), the combination of this reference with DASGUPTA would result in the limitation as claimed.
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to combine the base reference of DASGUPTA with the teachings of DEKA because both references explore the monitoring of machine learning models, analysis via metrics, and the detection of anomalies.
One of ordinary skill in the art would have been motivated to do so because receiving a recommended action based on data improves efficiency compared to a person having to manually evaluate the results to determine an appropriate action, and also allows the user to prevent an incorrect action from being performed if the analysis is deemed incorrect.
Regarding claim 17, DASGUPTA teaches the limitations of claim 14. Further, claim 17 recites similar additional limitations as claim 10 and is rejected under the same rationale.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW LEE LEWIS whose telephone number is (571)272-1906. The examiner can normally be reached Monday: 12:00PM - 4:00PM and Tuesday - Friday: 12:00PM - 9:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at (571)272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Matthew Lee Lewis/Examiner, Art Unit 2144
/TAMARA T KYLE/Supervisory Patent Examiner, Art Unit 2144