Prosecution Insights
Last updated: April 19, 2026
Application No. 17/854,482

TIME-VARYING FEATURES VIA METADATA

Final Rejection §101§103
Filed
Jun 30, 2022
Examiner
LEE, MICHAEL CHRISTOPHER
Art Unit
2128
Tech Center
2100 — Computer Architecture & Software
Assignee
Oracle International Corporation
OA Round
2 (Final)
59%
Grant Probability
Moderate
3-4
OA Rounds
3y 2m
To Grant
86%
With Interview

Examiner Intelligence

Grants 59% of resolved cases
59%
Career Allow Rate
80 granted / 136 resolved
+3.8% vs TC avg
Strong +27% interview lift
Without
With
+27.1%
Interview Lift
resolved cases with interview
Typical timeline
3y 2m
Avg Prosecution
54 currently pending
Career history
190
Total Applications
across all art units

Statute-Specific Performance

§101
29.1%
-10.9% vs TC avg
§103
45.0%
+5.0% vs TC avg
§102
11.5%
-28.5% vs TC avg
§112
12.3%
-27.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 136 resolved cases

Office Action

§101 §103
DETAILED ACTION Notice of AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Amendment Applicant’s Amendment and remarks dated 11/25/2025 have been considered. Claims 5, 12, and 19 are cancelled. Claims 1-4, 6-11, 13-18, and 20 are pending. Drawing Objections. The previous rejections to the drawings are withdrawn in view of Applicant’s substitute specification. However, new objections to the drawings are provided below. Specification Objections. The objections to the specification are withdrawn in view of the substitute specification provided by Applicant. However, new objections to the specification are provided below. Response to Arguments On page 10 of Applicant’s 11/25/2025 Amendment and remarks, Applicant asserts that no new matter has been added via the amendments to the claims. The examiner agrees that at least original claims 5, 12, and 19, together with paras. 0051 and 0069, provide sufficient written description support for the claim amendments. On page 15 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, Applicant argues that claim 1 “is not drafted in such a way as to monopolize any potential judicial exception as there are many ways in which to manipulate data.” The examiner respectfully disagrees. The “additional elements” analyzed under Step 2A, Prong 2, do not provide “meaningful limits on the judicial exceptions” identified under Step 2A, Prong 1. See MPEP 2106.04(d). The examiner respectfully submits that the “additional elements” in claim 1 are merely generic computing components, data gathering steps, and field-of-use limitations that do not provide meaningful limits on the judicial exceptions. On page 15 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, with respect to Step 2A, Prong 2, Applicant argues that the claim recites a “specific improvement over prior art systems.” PNG media_image1.png 338 642 media_image1.png Greyscale The examiner respectfully disagrees. While the claims (and paras. 0042-0043 of the specification) may relate to improvements to “time-series forecasting”, analyzing time-series data and making forecasts based on such data is a mental process. Therefore, any improvement is to the judicial exception, and not to the function of a computer or any other “technology” or “technical field.” See MPEP 2106.04(d)(1). There is no improvement to the technology of the “machine learning forecasting model,” which is recited at a high-level of generality. On page 16 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, with respect to Step 2A, Prong 2, Applicant argues: PNG media_image2.png 144 656 media_image2.png Greyscale The examiner respectfully disagrees that paras. 0042 and 0043 describe a “technical problem.” As explained above, time-series forecasting analyses are mental processes. On page 16 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, with respect to Step 2A, Prong 2, Applicant argues that the “claimed computer-implemented method addresses these challenges by introducing a systematic process for incorporating metadata-derived relationships and exogenous data into the forecasting workflow” and concludes that the claim “describes a technical improvement over conventional univariate forecasting methods that rely exclusively on static features.” The examiner respectfully disagrees that this is a “technical improvement.” As explained above, time-series forecasting analyses are mental processes. Applicant’s arguments merely improve on the mental processes by adding additional mental processes, and no actual technical improvement is made to the recited “computing device” or “machine learning forecasting model.” On page 18 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, with respect to Step 2B, Applicant argues that Federal Circuit precedent supports its argument. PNG media_image3.png 162 648 media_image3.png Greyscale The examiner respectfully disagrees. First, as Applicant notes, the claims at issue in DDR Holdings related to a “technological problem ‘particular to the Internet’” and in Trading Technologies related to GUI technologies, which are actual “technical fields.” Here, as explained above, the examiner respectfully submits that “performing analysis of time series information using time varying features” is not a technical field, but rather, is a mental process. Second, while Applicant asserts that the claims “utilize unconventional techniques”, Applicant has provided no evidence to support its contention that the techniques are “unconventional.” While MPEP 2106.05(d) explains that inclusion of an unconventional element “favors eligibility”, there is no evidence that such elements are either conventional or unconventional, and therefore this factor neither favors nor disfavors a finding of subject matter eligibility. On pages 18-19 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 101, with respect to Step 2B, Applicant argues that Federal Circuit precedent in Bascom supports its argument. PNG media_image4.png 228 652 media_image4.png Greyscale The examiner respectfully disagrees. As set forth in this office action, every single claim has been rejected under 35 U.S.C. 103, so the examiner respectfully disagrees that the claims recite an “inventive concept.” On pages 19-21 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 103 in view of the BLEDSOE and YUAN references, Applicant argues that the claim amendments overcome the present rejections. The examiner agrees that the claim amendments, and in particular the limitations reciting “determining, by the computing device, a lag value based at least in part on the metadata”, “shifting, by the computing device, the second value from the second time step to a third time step based at least in part on the lag value” and “determining, by the computing device, a lagged second value associated with the third time step” are not explicitly taught by either BLEDSOE or YUAN. The previous rejections under 35 U.S.C. 103 are hereby withdrawn. However, Applicant’s amendments necessitated new grounds of rejection in view of the BLEDSOE, YUAN, and SALUNKE references as explained in the detailed rejections below. On pages 21-24 of Applicant’s 11/25/2025 Amendment and remarks, with respect to the rejections under 35 U.S.C. 103 of the remaining claims, Applicant argues that such rejections are overcome for the same reasons argued with respect to claim 1. The examiner agrees that the previous rejections to all remaining claims are overcome, but new grounds of rejection, necessitated by Applicant’s amendments to the independent claims, are set forth herein as explained in the detailed rejections below. Drawings The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: In para. 0098, line 5 (of the 11/25/2025 marked-up substitute specification), “secure shell (SSH) VCN 1312” does not appear in substitute Fig. 13. In substitute Fig. 13, this item is referred to as “312.” In para. 0098, line 5 (of the 11/25/2025 marked-up substitute specification), “control plane VCN 1360” does not appear in substitute Fig. 13. The examiner believes that this may supposed to refer to “control plane VCN 1316”. Review and correction is respectfully requested. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance. Specification The disclosure is objected to because of the following informalities: In para. 0100, line 4 (of the 11/25/2025 marked-up substitute specification), “compute instance 1244” (used twice), should refer to “compute instance 1344”. In para. 0108, line 9 (of the 11/25/2025 marked-up substitute specification), “public Internet 1318” should read “public Internet 1354” In para. 0148, line 9 (of the 11/25/2025 marked-up substitute specification), “IEEE302.11” should read “IEEE802.11”. Appropriate correction is required. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-4, 6-11, 13-18, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Regarding Step 1 of the Alice/Mayo framework, Claims 1-4 and 6-7 are directed to a method (a process), Claims 8-11 and 13-14 are directed to a system (a machine), and Claims 15-18 and 20 are directed to a non-transitory computer-readable medium (an article of manufacture), which each fall within one of the four statutory categories of inventions. Regarding Claim 1 Step 2A, prong 1 (Is the claim directed to a law of nature, a natural phenomenon or an abstract idea). Claim 1 recites the following mental processes, that in each case under the broadest reasonable interpretation, covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components (e.g., “computer-implemented”, “computing device” and “machine learning forecasting model”). determining ... a lag value based at least in part on the metadata (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can mentally select a lag value, such as a lag of 1 minute in the time series data, using metadata as a consideration) shifting ... the second value from the second time step to a third time step based at least in part on the lag value (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can mentally shift a value from one time step to another, e.g., by adding a lag value to a second time step to result in a third time step) determining ... a lagged second value associated with the third time step (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can mentally determine a lagged second value based on the third time step) detecting, ... the relationship between the first value associated with the first time step of the time-series data and the lagged second value associated with the second time step of the time-series data based at least in part on the metadata; (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can consider first and lagged second values as recited and determine a relationship between them mentally, e.g., such as a time series of temperature over time, can note the relationship between a time at 2pm on day 1 and 3pm on day 1, e.g., the relationship is that they are both times captured in the afternoon) generating, ..., a time-varying feature from a combination of the first value associated with the first time step of the time-series data and the lagged second value associated with the second time step of the time-series data based at least in part on the relationship detected from the metadata; (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can generate a time-varying feature from the first value, the lagged second value, and the relationship, such as the difference between the lagged second value and the first value, which will vary depending on the amount of time between the first and second values) generating, ..., an input data value for a machine learning forecasting model by applying the exogenous data value to the time-varying feature; and (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can determine an input value for a machine learning forecasting model, by considering exogenous data, for example, multiplying the time-varying feature from above by a ratio dependent on cloud cover) generating, ... a forecasted value for the time-series data based at least in part on the input data value. (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can mentally predict a forecasted value for 4pm on day 1 by using the input data value previously generated) Step 2A, prong 2 (Does the claim recite additional elements that integrate the judicial exception into a practical application?). The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements (e.g., “computer-implemented”, “computing device” and “machine learning forecasting model”) which are recited at a high-level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). Regarding the “by the computing device” and “by the computing device implementing the machine learning forecasting model” limitation, such limitations are recited at a high-level of generality and amount to no more than adding the words “apply it” (or an equivalent) with the judicial exception. In particular, the claim only recites the additional elements of generic computing devices and machine learning models. These additional elements are recited at a high-level of generality and amount to no more than mere instructions to apply the exception using generic computer components (generic computing devices and machine learning models). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)). Regarding the “receiving, by a computing device, a first value associated with a first time step of time-series data and a second value associated with a second timestep of the time-series data” limitation, such additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)). Regarding the “receiving, by the computing device, metadata that describes a relationship between the first value associated with the first time step of the time-series data and the second value associated with the second time step of the time-series data,” limitation, such additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)). Regarding the “the metadata being generated based at least in part on the time-series data” limitation, this limitation merely describes the data environment, and therefore such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (a particular data environment of time-series data and related metadata). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not integrate a judicial exception into a practical application. Regarding the “receiving, by the computing device, an exogenous data value, the exogenous data value being generated distinctly from the time-series data” limitation, such additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process (see MPEP 2106.05(g)). Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?) In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more that the judicial exception. As discussed above, the additional elements (e.g., “computer-implemented”, “computing device” and “machine learning forecasting model”) are recited at a high-level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)). Regarding the “by the computing device” and “by the computing device implementing the machine learning forecasting model” limitation, such limitation is recited at a high-level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)). Regarding the “receiving, by a computing device, a first value associated with a first time step of time-series data and a second value associated with a second timestep of the time-series data” limitation, as discussed above, the additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory"). Regarding the “receiving, by the computing device, metadata that describes a relationship between the first value associated with the first time step of the time-series data and the second value associated with the second time step of the time-series data,” limitation, as discussed above, the additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory"). Regarding the “the metadata being generated based at least in part on the time-series data” limitation, such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use as explained above, which does not amount to significantly more than the judicial exception. MPEP 2106.05(h). Regarding the “receiving, by the computing device, an exogenous data value, the exogenous data value being generated distinctly from the time-series data” limitation, as discussed above, the additional element of a data gathering step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. pre-solution activity of gathering data for use in the claimed process. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory"). Regarding Claim 2 Step 2A, Prong 2 Regarding the “further comprising transmitting the input data value to the machine learning forecasting model” limitation, such additional element of a data transmitting step is recited at a high level of generality and amounts to extra-solution activity of transmitting data, i.e. post-solution activity of transmitting data from the claimed process (see MPEP 2106.05(g)). Step 2B Regarding the “further comprising transmitting the input data value to the machine learning forecasting model” limitation, as discussed above, the additional element of a data transmitting step is recited at a high level of generality and amounts to extra-solution activity of receiving data, i.e. post-solution activity of transmitting data from the claimed process. The courts have found limitations directed to transmitting information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping," and "storing and retrieving information in memory"). Regarding Claim 3 Step 2A, Prong 2 Regarding the “further comprising outputting a local explanation, a global explanation, a fitted series, and a rolling-origin cross-validation error” limitation, such limitation amounts to extra solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). Step 2B Regarding the “further comprising outputting a local explanation, a global explanation, a fitted series, and a rolling-origin cross-validation error” limitation, this limitation amounts to extra solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). The courts have similarly found limitations directed to displaying a result, recited at a high level of generality, to be well-understood, routine, and conventional. See (MPEP 2106.05(d)(II), "presenting offers and gathering statistics.", “determining an estimated outcome and setting a price”) Regarding Claim 4 Step 2A, Prong 1 further comprising adding together the first value associated with the first time step of the time-series data and the lagged second value associated with the second time step of the time-series data. (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can mentally add together the first and lagged second values; the examiner further notes that this addition step is a mathematical calculation which is another type of abstract idea) Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception. Regarding Claim 6 Step 2A, Prong 1 implements a gradient boosting technique to generate the forecasted value for the time-series data. (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can mentally (or with paper and pencil) perform a gradient boosting technique, which is a mathematical technique. Step 2A, Prong 2 Regarding the “wherein the machine learning forecasting model” limitation, such limitation is recited at a high-level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception. In particular, the claim only recites the additional element of generic machine learning. This additional element is recited at a high-level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component (generic machine learning). Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)). Step 2B Regarding the “wherein the machine learning forecasting model” limitation, such limitation is recited at a high-level of generality and amounts to no more than adding the words “apply it” (or an equivalent) with the judicial exception, because the limitation merely provides instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea. Accordingly, this additional element does not add significantly more than the judicial exception. (See MPEP 2106.05(f)). Regarding Claim 7 Step 2A, Prong 1 wherein the time-varying feature comprises a multi-time step rolling mean value. (under the broadest reasonable interpretation, this limitation can be performed mentally (or with physical aids such as a pencil and paper), for example, a human can mentally calculate a rolling mean value (such as a simple moving average) using multi-time steps) Regarding Step 2A, Prong 2, the claim does not include any additional elements that integrate the judicial exception into a practical application and regarding Step 2B, there are no additional elements recited that amount to significantly more than the judicial exception. Regarding Claim 8 Step 2A, Prong 1 Claim 8 recites a computing system that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 1 with respect to claim 1 also applies to this claim 8. While claim 8 recites additional generic computing components (“computing system”, “processor”, “computer-readable medium”, “instructions”, and “machine learning forecasting model”), such additional generic computing components do not change the analysis under Step 2A, Prong 1. The examiner notes that claim 8 slightly differs from claim 1, in that the “detect the relationship between the first value associated with the first time step of the time-series data and the second value associated with the second time step of the time-series data based at least in part on the metadata” is based on the “second value” and not the “lagged second value”, but that subtle distinction does not change the mental process analysis. Step 2A, Prong 2 Claim 8 recites a computing system that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 2 with respect to claim 1 also applies to this claim 8. While claim 8 recites additional generic computing components (“computing system”, “processor”, “computer-readable medium”, “instructions”, and “machine learning forecasting model”), such additional generic computing components do not change the analysis under Step 2A, Prong 2. Step 2B Claim 8 recites a computing system that corresponds to the method of claim 1, and therefore the analysis under Step 2B with respect to claim 1 also applies to this claim 8. While claim 8 recites additional generic computing components (“computing system”, “processor”, “computer-readable medium”, “instructions”, and “machine learning forecasting model”), such additional generic computing components do not change the analysis under Step 2B. Claims 9-11 and 13-14 depend from claim 8 and correspond to the methods of claims 2-4 and 6-7, respectively, and are therefore rejected for the same reasons explained above with respect to claims 8 and claims 2-4 and 6-7, respectively. Regarding Claim 15 Step 2A, Prong 1 Claim 15 recites a non-transitory computer-readable medium that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 1 with respect to claim 1 also applies to this claim 15. While claim 15 recites additional generic computing components (“processor”, “non-transitory computer-readable medium”, “instructions”, and “machine learning forecasting model”), such additional generic computing components do not change the analysis under Step 2A, Prong 1. Step 2A, Prong 2 Claim 15 recites a non-transitory computer-readable medium that corresponds to the method of claim 1, and therefore the analysis under Step 2A, Prong 2 with respect to claim 1 also applies to this claim 8. While claim 8 recites additional generic computing components (“processor”, “non-transitory computer-readable medium”, “instructions”, and “machine learning forecasting model”), such additional generic computing components do not change the analysis under Step 2A, Prong 2. Step 2B Claim 15 recites a non-transitory computer-readable medium that corresponds to the method of claim 1, and therefore the analysis under Step 2B with respect to claim 1 also applies to this claim 15. While claim 15 recites additional generic computing components (“processor”, “non-transitory computer-readable medium”, “instructions”, and “machine learning forecasting model”), such additional generic computing components do not change the analysis under Step 2B. Claim 16 depends from claim 15 and claims a non-transitory computer-readable medium that corresponds to the method of claim 2, and is therefore rejected for the same reasons explained above with respect to claims 2 and 15. Regarding Claim 17 Step 2A, Prong 2 Regarding the “wherein the operations further comprise outputting a forecasted value based at least in part on the input data, the forecasted value being forecasted from the time-series data” limitation, such limitation amounts to extra solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). Moreover, the data inputs used to generate the forecasted values merely describes a particular data environment, and therefore such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not integrate a judicial exception into a practical application. Step 2B Regarding the “wherein the operations further comprise outputting a forecasted value based at least in part on the input data, the forecasted value being forecasted from the time-series data” limitation, this limitation amounts to extra solution activity because it is a mere nominal or tangential addition to the claim, amounting to mere data output (see MPEP 2106.05(g)). The courts have similarly found limitations directed to displaying a result, recited at a high level of generality, to be well-understood, routine, and conventional. See (MPEP 2106.05(d)(II), "presenting offers and gathering statistics.", “determining an estimated outcome and setting a price”). Moreover, such limitation amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use as explained above, which does not amount to significantly more than the judicial exception. MPEP 2106.05(h). Claim 18 depends from claim 15 and claims a non-transitory computer-readable medium that corresponds to the method of claim 4, and is therefore rejected for the same reasons explained above with respect to claims 4 and 15. Claim 20 depends from claim 15 and claims a non-transitory computer-readable medium that corresponds to the method of claim 6, and is therefore rejected for the same reasons explained above with respect to claims 6 and 15. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-2, 4, 8-9, 11, 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over US 20180300737 A1, hereinafter referenced as BLEDSOE, in view of US 20170031867 A1, hereinafter referenced as YUAN, and further in view of US 20170329660 A1, hereinafter referenced as SALUNKE. Regarding Claim 1 BLEDSOE teaches: A computer-implemented method, the method comprising: (BLEDSOE, para. 0117: “It is intended that the systems and methods described herein can be performed by software (stored in memory and/or executed on hardware), hardware, or a combination thereof. ... Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter.”) receiving, by a computing device, a first value associated with a first time step of time-series data and a second value associated with a second timestep of the time-series data; (BLEDSOE, para. 0020: “Time series are sequences of data points representing samples or observations often collected at discrete and equally spaced time intervals.”; BLEDSOE, para. 0025: “In some implementations, network 103 couples TSF server 101 to multiple time series data sources including, for example, data repositories 107, application server 111A, web server 111B, commerce server 111C, media server 111D, and other suitable data sources not shown in FIG. 1. Alternatively or in addition, TSF server 101 can be directly coupled to time series data sources, as opposed to, via network 103. In some other implementations, TSF server 101 can include a time series monitoring system (not shown in FIG. 1) to capture time series data points. In such a case, TSF server 101 can alternatively receive and integrate data associated with the time series data points from server 111A, 111B, 111C, 111D, and/or data repositories 107 in for example a system memory or TSF repository 105.”; BLEDSOE, para. 0026: “TSF server 101 collects and/or receives datasets, via network 103. In some instances, time series have descriptive values or data points associated with a feature of an entity. In some other instances, each time series observation or sample can include a vector of values; these types of time series are known as multivariate time series. In some cases, time series are evenly spaced over time according to a constant scale or spaced time interval e.g., year, month, day, hour, second, and so forth. Examples of evenly spaced time series include monthly indices of industrial production of a given country, annual per capita gross domestic product for a group of countries, daily sales of a given product, and other suitable types of evenly spaced time series.”; Examiner’s Note (EN): Time Series Forecasting (TSF) Server 101 (see def. at para. 0024), which is a computing system, receives time series data from repositories 107 and 111A-D, where each time series has multiple datapoints that are evenly spaced in time, such that the first datapoint at the first point in time corresponds to the recited “first value associated with a first time step of time-series data” and a later datapoint corresponds to the recited “second value”) receiving, by the computing device, metadata that describes a relationship between the first value associated with the first time step of the time-series data and the second value associated with the second time step of the time-series data, the metadata being generated based at least in part on the time-series data; (BLEDSOE, para. 0058: “ As shown in FIG. 3, during a data ingestion phase, TSF server 101 retrieves or captures time series data and/or data associated with a time series from private data repositories 301.”; BLEDSOE, para. 0059: “ during an extraction phase, TSF server 101 can retrieve or capture enrichment data from semi-public data sources 303. Semi-public data sources 303 can be implemented in, for example, servers 111A-111D discussed with reference to FIG. 1 or other suitable platforms. In some instances, TSF server 101 can retrieve enrichment data or other suitable type of data via a public API provided by semi-public data sources 303. ... In some instances, enrichment data can convey data associated with a time series, for example, social sentiment, promotional or advertising data, economic conditions, climatic conditions, demographic data, and other suitable enrichment data. For instance, time series data points (e.g., number of sales per day) can be enriched with descriptive data of climatic conditions of a day (e.g., rainy, sunny, or other suitable climatic descriptive values). In some instances, enrichment data is integrated into forecasting models to achieve greater forecasting accuracy.”; Examiner’s Note (EN): TSF server 101 receives “enrichment data” that is “associated with a time series” (corresponding to recited “metadata”), where such metadata is generated by the source of such enrichment data and then received by TSF server 101, where the enrichment data shows relationships between datapoints, for example, time series datapoints related to sales can be enriched with climate conditions, e.g., a relationship between weather and number of sales) detecting, by the computing device, the relationship between the first value associated with the first time step of the time-series data and the (BLEDSOE, para. 0077: “At 503, TSF server 101 determines time series characteristics based on monitored data points of a time series and/or datasets with data associated with a time series. Examples of time series characteristics determined at 503 include occurrences of dead data periods, number of observations or samples available for training and/or testing (i.e., sample size), constant data, exogenous variables associated with a time series, sparseness of time series data points, standard deviation of time series data points, shape distribution of a time series, and other suitable time series characteristics.”; (EN): the TSF server 101 detects time series characteristics (corresponding to recited “relationship between the first value ... and the second value”) based on the “datasets with data associated with a time series” (corresponding to recited “metadata” of the time-series data)) generating, by the computing device, a time-varying feature from a combination of the first value associated with the first time step of the time-series data and the (BLEDSOE, para. 0062: “During feature engineering 311, TSF server 101, determines what features or data can be useful for the forecasting of time series. For instance, climatic condition features can be used to more accurately forecast the number of sales expected to be made at a given store. Accordingly, in some implementations, TSF server 101 can analyze past observations or samples of sales time series along with other associated data to categorize exogenous or covariant features, as strongly relevant, weakly relevant, or irrelevant, and consequently integrate relevant and/or weakly relevant features into model optimization 313.”; (EN): the examiner notes that the broadest reasonable interpretation of “time-varying features” includes “features that use relationships described in the metadata 204 to describe interdependencies between the values of the time-series data 202”, and BLEDSOE teaches feature engineering to determine covariant features with respect to the time series data, corresponding to recited “time-varying features”) receiving, by the computing device, an exogenous data value, the exogenous data value being generated distinctly from the time-series data; (BLEDSOE, para. 0032: “Moreover, TSF repository 105 can store data computed during and in between election contests and data computed from received samples or observations of a time series. Such data includes the number of features (e.g., exogenous features) associated with a time series”; BLEDSOE, para. 0059: “TSF server 101 trains entrant forecasting models to produce forecasted data points of a time series model taking into account exogenous variables extracted from semi-public data sources 303”; Examiner’s Note: the TSF server 101 receives exogenous variables from the data sources 303, where such data sources are external to the TSF server, and therefore such variables are determined independently (distinctly) from the time series data collected by TSF server 101 from repositories 107 and 111A-D) generating, by the computing device implementing the machine learning forecasting model, a forecasted value for the time-series data ... (BLEDSOE, para. 0097: “At 607, TSF server 101 trains an entrant forecasting model using data points of the time series included in the first set (i.e., training set). Some machine learning techniques that can be used during the training process include sliding-window methods, recurrent sliding windows, hidden Markov models, maximum entropy Markov models, input-output Markov models, conditional random fields, graph transformer networks, and other supervised machine learning techniques. TSF server 101 executes the trained entrant forecasting model to produce a set of forecasted data points of the time series.”) However, BLEDSOE fails to explicitly teach: determining, by the computing device, a lag value based at least in part on the metadata; shifting, by the computing device, the second value from the second time step to a third time step based at least in part on the lag value; determining, by the computing device, a lagged second value associated with the third time step; lagged second value generating, by the computing device, an input data value for a machine learning forecasting model by applying the exogenous data value to the time-varying feature; and based at least in part on the input data value However, in a related field of endeavor (forecasting power output at a later point in time, paras. 0004, 0006), YUAN teaches: generating, by the computing device, an input data value for a machine learning forecasting model by applying the exogenous data value to the time-varying feature; (YUAN, para. 0021: “For example, a time delayed neural network (TDNN) is one of the NNs that may be applied to solar power forecasting.” YUAN, para. 0041: “Each of the K features may correspond or be based on a different function gwk (t+h), that was fitted using data from windows of differing sizes. The step of forecasting the exogenous variables may occur before, after, or at the same time as the step of computing the K features.” YUAN, para. 0042: “For example, the K features (gwk (t+h)) may be linearly combined with the exogenous vector (variables) using Equation 9 to predict the future power output Õt+h of the PV plant.”; Examiner’s Note: YUAN teaches K features (based on a time-varying function) and combining such features with exogeneous variable vectors, corresponding to recited “applying the exogenous data value to the time-varying feature” and further discloses forecasting using a neural network; the BLEDSOE-YUAN combination now modifies the machine learning forecasting TSF server 101 of BLEDSOE to use a combination of the features engineered by BLEDSOE and the exogeneous data of BLEDSOE, where such combination for input to a machine learning model is taught by YUAN) generating, by the computing device implementing the machine learning forecasting model, a forecasted value for the time-series data based at least in part on the input data value (YUAN, para. 0021: “For example, a time delayed neural network (TDNN) is one of the NNs that may be applied to solar power forecasting.” YUAN, para. 0041: “Each of the K features may correspond or be based on a different function gwk (t+h), that was fitted using data from windows of differing sizes. The step of forecasting the exogenous variables may occur before, after, or at the same time as the step of computing the K features.” YUAN, para. 0042: “For example, the K features (gwk (t+h)) may be linearly combined with the exogenous vector (variables) using Equation 9 to predict the future power output Õt+h of the PV plant.”; Examiner’s Note: the BLEDSOE-YUAN combination now modifies the machine learning forecasting TSF server 101 of BLEDSOE to use a combination of the features engineered by BLEDSOE and the exogeneous data of BLEDSOE, where such combination for input to a machine learning model is taught by YUAN, in order to forecast a value as in BLEDSOE) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN as explained above. As disclosed by YUAN, one of ordinary skill would have been motivated to do so because in the power forecasting setting, YUAN discloses the importance of using exogenous variables together with time series data. (para. 0015). One of ordinary skill would further understand the benefit of performing feature engineering (as in BLEDSOE) to combine specific exogenous features with time series features in order to take into account additional variables for use in forecasting. However, BLEDSOE and YUAN fail to explicitly teach: determining, by the computing device, a lag value based at least in part on the metadata; shifting, by the computing device, the second value from the second time step to a third time step based at least in part on the lag value; determining, by the computing device, a lagged second value associated with the third time step; lagged second value However, in a related field of endeavor (monitoring and analyzing time series data, see para. 0057), SALUNKE teaches: determining, by the computing device, a lag value based at least in part on the metadata; (SALUNKE, para. 0095: “FIG. 5 illustrates an example set of operations for training a correlation prediction model in accordance with one or more embodiments. At 510, training logic 110 aligns the base and related metrics. Training logic 110 may use a nearest neighbor technique to align the different metrics, although any time-series alignment algorithm may be used at this step. According to the nearest neighbor technique, training logic 110 examines timestamps that are associated with the sample metric values. Training logic 110 then aligns sample values that are nearest in time to each other. As an example scenario, a series of sample values tracking a first metric may be received at 9:00 a.m., 9:15 a.m., and 9:30 a.m. on Monday. A second series of sample values tracking a second metric may be received at 9:02 a.m., 9:16 a.m., and 9:33 a.m. Training logic 110 may align the 9:00 a.m. sample value from the first metric with the 9:02 a.m. sample value from the second metric, the 9:15 a.m. value with the 9:16 a.m. sample value, and the 9:30 a.m. value with the 9:33 a.m. value. In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: SALUNKE discloses introducing a “lag” to a metric when training a prediction model, including timestamp information; the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding introducing a lag into data used to train a prediction model) shifting, by the computing device, the second value from the second time step to a third time step based at least in part on the lag value; (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: SALUNKE discloses introducing a “lag” to a metric when training a prediction model by shifting a metric by a threshold period of time (corresponding to recited “lag value”); the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding shifting a value from a time step to a different time step by a “threshold period of time” as in SALUNKE) determining, by the computing device, a lagged second value associated with the third time step; (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: SALUNKE discloses introducing a “lag” to a metric when training a prediction model by shifting a metric by a threshold period of time (corresponding to recited “lag value”); the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding shifting a value from a time step to a different time step by a “threshold period of time” as in SALUNKE) detecting, by the computing device, the relationship between the first value associated with the first time step of the time-series data and the lagged second value associated with the second time step of the time-series data based at least in part on the metadata (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: As explained above, SALUNKE teaches the concept of generating a lagged second value, and BLEDSOE teaches that the TSF server 101 detects time series characteristics (corresponding to recited “relationship between the first value ... and the lagged second value”) based on the “datasets with data associated with a time series” (corresponding to recited “metadata” of the time-series data), as explained by BLEDSOE at para. 0077; the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding creating a second lagged value, and applies the teachings of BLEDSOE to such second lagged value) generating, by the computing device, a time-varying feature from a combination of the first value associated with the first time step of the time-series data and the lagged second value associated with the second time step of the time-series data based at least in part on the relationship detected from the metadata (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: As explained above, SALUNKE teaches the concept of generating a lagged second value, and BLEDSOE teaches feature engineering to determine covariant features with respect to the time series data, corresponding to recited “time-varying features” (see BLEDSOE at para. 0062); the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding creating a second lagged value, and applies the feature engineering teachings of BLEDSOE to use such second lagged value) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN and SALUNKE as explained above. As disclosed by SALUNKE, one of ordinary skill would have been motivated to do so because introducing a phase shift, such as a “lag” to a metric used in training data, can help to “determine whether a delayed correlation is present” in the training data. (para. 0095). Regarding Claim 2 BLEDSOE, YUAN, and SALUNKE disclose the method of claim 1. However, BLEDSOE fails to explicitly teach: further comprising transmitting the input data value to the machine learning forecasting model. However, in a related field of endeavor (forecasting power output at a later point in time, paras. 0004, 0006), YUAN teaches: further comprising transmitting the input data value to the machine learning forecasting model. (YUAN, para. 0014: “The meter may periodically transmit these measures across a network 120 to the prediction system 130.”; (EN): the BLEDSOE-YUAN combination now modifies the machine learning forecasting TSF server 101 of BLEDSOE such that the data is transmitted to such forecasting server as in YUAN). Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN and SALUNKE as explained above. One of ordinary skill would understand the benefit of transmitting data from various sources directly the TSF server so that such TSF server can use such data for feature engineering. Regarding Claim 4 BLEDSOE, YUAN, and SALUNKE disclose the method of claim 1. BLEDSOE further teaches: further comprising adding together the first value associated with the first time step of the time-series data and the (BLEDSOE, para. 0037, Table 2, identifying “Autoregressive Moving Average (ARMA)”, “Moving Average (MA)”, and “model Autoregressive (AR) model Autoregressive Moving Average” functions; (EN): moving averages require adding up the datapoint values and dividing over a period of time) However, BLEDSOE and YUAN fail to explicitly teach: lagged second value However, in a related field of endeavor (monitoring and analyzing time series data, see para. 0057), SALUNKE teaches: further comprising adding together the first value associated with the first time step of the time-series data and the lagged second value associated with the second time step of the time-series data (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: As explained above, SALUNKE teaches the concept of generating a lagged second value, and BLEDSOE teaches moving averages that require adding up the datapoint values and dividing over a period of time (see BLEDSOE at para. 0037); the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding creating a second lagged value, and applies the moving average teachings of BLEDSOE to use such second lagged value) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN and SALUNKE as explained above. As disclosed by SALUNKE, one of ordinary skill would have been motivated to do so because introducing a phase shift, such as a “lag” to a metric used in training data, can help to “determine whether a delayed correlation is present” in the training data. (para. 0095). Regarding Claim 8 BLEDSOE teaches: A computing system, comprising: a processor; and a computer-readable medium including instructions that, when executed by the processor, cause the processor to: (BLEDSOE, para. 0117: “Hardware modules may include, for example, a general-purpose processor”; BLEDSOE, para. 0118: “Some embodiments described herein relate to devices with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium or memory) having instructions or computer code thereon for performing various computer-implemented operations.”) receive a first value associated with a first time step of time-series data and a second value associated with a second timestep of the time-series data; (BLEDSOE, para. 0020: “Time series are sequences of data points representing samples or observations often collected at discrete and equally spaced time intervals.”; BLEDSOE, para. 0025: “In some implementations, network 103 couples TSF server 101 to multiple time series data sources including, for example, data repositories 107, application server 111A, web server 111B, commerce server 111C, media server 111D, and other suitable data sources not shown in FIG. 1. Alternatively or in addition, TSF server 101 can be directly coupled to time series data sources, as opposed to, via network 103. In some other implementations, TSF server 101 can include a time series monitoring system (not shown in FIG. 1) to capture time series data points. In such a case, TSF server 101 can alternatively receive and integrate data associated with the time series data points from server 111A, 111B, 111C, 111D, and/or data repositories 107 in for example a system memory or TSF repository 105.”; BLEDSOE, para. 0026: “TSF server 101 collects and/or receives datasets, via network 103. In some instances, time series have descriptive values or data points associated with a feature of an entity. In some other instances, each time series observation or sample can include a vector of values; these types of time series are known as multivariate time series. In some cases, time series are evenly spaced over time according to a constant scale or spaced time interval e.g., year, month, day, hour, second, and so forth. Examples of evenly spaced time series include monthly indices of industrial production of a given country, annual per capita gross domestic product for a group of countries, daily sales of a given product, and other suitable types of evenly spaced time series.”; Examiner’s Note (EN): Time Series Forecasting (TSF) Server 101 (see def. at para. 0024), which is a computing system, receives time series data from repositories 107 and 111A-D, where each time series has multiple datapoints that are evenly spaced in time, such that the first datapoint at the first point in time corresponds to the recited “first value associated with a first time step of time-series data” and a later datapoint corresponds to the recited “second value”) receive metadata that describes a relationship between the first value associated with the first time step of the time-series data and the second value associated with the second time step of the time-series data, the metadata being generated based at least in part on the time-series data; (BLEDSOE, para. 0058: “ As shown in FIG. 3, during a data ingestion phase, TSF server 101 retrieves or captures time series data and/or data associated with a time series from private data repositories 301.”; BLEDSOE, para. 0059: “ during an extraction phase, TSF server 101 can retrieve or capture enrichment data from semi-public data sources 303. Semi-public data sources 303 can be implemented in, for example, servers 111A-111D discussed with reference to FIG. 1 or other suitable platforms. In some instances, TSF server 101 can retrieve enrichment data or other suitable type of data via a public API provided by semi-public data sources 303. ... In some instances, enrichment data can convey data associated with a time series, for example, social sentiment, promotional or advertising data, economic conditions, climatic conditions, demographic data, and other suitable enrichment data. For instance, time series data points (e.g., number of sales per day) can be enriched with descriptive data of climatic conditions of a day (e.g., rainy, sunny, or other suitable climatic descriptive values). In some instances, enrichment data is integrated into forecasting models to achieve greater forecasting accuracy.”; Examiner’s Note (EN): TSF server 101 receives “enrichment data” that is “associated with a time series” (corresponding to recited “metadata”), where such metadata is generated by the source of such enrichment data and then received by TSF server 101, where the enrichment data shows relationships between datapoints, for example, time series datapoints related to sales can be enriched with climate conditions, e.g., a relationship between weather and number of sales) detect the relationship between the first value associated with the first time step of the time-series data and the second value associated with the second time step of the time-series data based at least in part on the metadata; (BLEDSOE, para. 0077: “At 503, TSF server 101 determines time series characteristics based on monitored data points of a time series and/or datasets with data associated with a time series. Examples of time series characteristics determined at 503 include occurrences of dead data periods, number of observations or samples available for training and/or testing (i.e., sample size), constant data, exogenous variables associated with a time series, sparseness of time series data points, standard deviation of time series data points, shape distribution of a time series, and other suitable time series characteristics.”; (EN): the TSF server 101 detects time series characteristics (corresponding to recited “relationship between the first value ... and the second value”) based on the “datasets with data associated with a time series” (corresponding to recited “metadata” of the time-series data)) generate a time-varying feature from a combination of the first value associated with the first time step of the time-series data and the (BLEDSOE, para. 0062: “During feature engineering 311, TSF server 101, determines what features or data can be useful for the forecasting of time series. For instance, climatic condition features can be used to more accurately forecast the number of sales expected to be made at a given store. Accordingly, in some implementations, TSF server 101 can analyze past observations or samples of sales time series along with other associated data to categorize exogenous or covariant features, as strongly relevant, weakly relevant, or irrelevant, and consequently integrate relevant and/or weakly relevant features into model optimization 313.”; (EN): the examiner notes that the broadest reasonable interpretation of “time-varying features” includes “features that use relationships described in the metadata 204 to describe interdependencies between the values of the time-series data 202”, and BLEDSOE teaches feature engineering to determine covariant features with respect to the time series data, corresponding to recited “time-varying features”) receive an exogenous data value, the exogenous data value being generated distinctly from the time-series data; (BLEDSOE, para. 0032: “Moreover, TSF repository 105 can store data computed during and in between election contests and data computed from received samples or observations of a time series. Such data includes the number of features (e.g., exogenous features) associated with a time series”; BLEDSOE, para. 0059: “TSF server 101 trains entrant forecasting models to produce forecasted data points of a time series model taking into account exogenous variables extracted from semi-public data sources 303”; Examiner’s Note: the TSF server 101 receives exogenous variables from the data sources 303, where such data sources are external to the TSF server, and therefore such variables are determined independently (distinctly) from the time series data collected by TSF server 101 from repositories 107 and 111A-D) generate, by implementing the machine learning forecasting model, a forecasted value for the time-series data... (BLEDSOE, para. 0097: “At 607, TSF server 101 trains an entrant forecasting model using data points of the time series included in the first set (i.e., training set). Some machine learning techniques that can be used during the training process include sliding-window methods, recurrent sliding windows, hidden Markov models, maximum entropy Markov models, input-output Markov models, conditional random fields, graph transformer networks, and other supervised machine learning techniques. TSF server 101 executes the trained entrant forecasting model to produce a set of forecasted data points of the time series.”) However, BLEDSOE fails to explicitly teach: determine a lag value based at least in part on the metadata; shift the second value from the second time step to a third time step based at least in part on the lag value; determine a lagged second value associated with the third time step; lagged second value generate an input data value for a machine learning forecasting model by applying the exogenous data value to the time-varying feature; and based at least in part on the input data value However, in a related field of endeavor (forecasting power output at a later point in time, paras. 0004, 0006), YUAN teaches: generate an input data value for a machine learning forecasting model by applying the exogenous data value to the time-varying feature; and (YUAN, para. 0021: “For example, a time delayed neural network (TDNN) is one of the NNs that may be applied to solar power forecasting.” YUAN, para. 0041: “Each of the K features may correspond or be based on a different function gwk (t+h), that was fitted using data from windows of differing sizes. The step of forecasting the exogenous variables may occur before, after, or at the same time as the step of computing the K features.” YUAN, para. 0042: “For example, the K features (gwk (t+h)) may be linearly combined with the exogenous vector (variables) using Equation 9 to predict the future power output Õt+h of the PV plant.”; Examiner’s Note: YUAN teaches K features (based on a time-varying function) and combining such features with exogeneous variable vectors, corresponding to recited “applying the exogenous data value to the time-varying feature” and further discloses forecasting using a neural network; the BLEDSOE-YUAN combination now modifies the machine learning forecasting TSF server 101 of BLEDSOE to use a combination of the features engineered by BLEDSOE and the exogeneous data of BLEDSOE, where such combination for input to a machine learning model is taught by YUAN) generate, by implementing the machine learning forecasting model, a forecasted value for the time-series data based at least in part on the input data value. (YUAN, para. 0021: “For example, a time delayed neural network (TDNN) is one of the NNs that may be applied to solar power forecasting.” YUAN, para. 0041: “Each of the K features may correspond or be based on a different function gwk (t+h), that was fitted using data from windows of differing sizes. The step of forecasting the exogenous variables may occur before, after, or at the same time as the step of computing the K features.” YUAN, para. 0042: “For example, the K features (gwk (t+h)) may be linearly combined with the exogenous vector (variables) using Equation 9 to predict the future power output Õt+h of the PV plant.”; Examiner’s Note: the BLEDSOE-YUAN combination now modifies the machine learning forecasting TSF server 101 of BLEDSOE to use a combination of the features engineered by BLEDSOE and the exogeneous data of BLEDSOE, where such combination for input to a machine learning model is taught by YUAN, in order to forecast a value as in BLEDSOE) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN as explained above. As disclosed by YUAN, one of ordinary skill would have been motivated to do so because in the power forecasting setting, YUAN discloses the importance of using exogenous variables together with time series data. (para. 0015). One of ordinary skill would further understand the benefit of performing feature engineering (as in BLEDSOE) to combine specific exogenous features with time series features in order to take into account additional variables for use in forecasting. However, BLEDSOE and YUAN fail to explicitly teach: determine a lag value based at least in part on the metadata; shift the second value from the second time step to a third time step based at least in part on the lag value; determine a lagged second value associated with the third time step; lagged second value However, in a related field of endeavor (monitoring and analyzing time series data, see para. 0057), SALUNKE teaches: determine a lag value based at least in part on the metadata; (SALUNKE, para. 0095: “FIG. 5 illustrates an example set of operations for training a correlation prediction model in accordance with one or more embodiments. At 510, training logic 110 aligns the base and related metrics. Training logic 110 may use a nearest neighbor technique to align the different metrics, although any time-series alignment algorithm may be used at this step. According to the nearest neighbor technique, training logic 110 examines timestamps that are associated with the sample metric values. Training logic 110 then aligns sample values that are nearest in time to each other. As an example scenario, a series of sample values tracking a first metric may be received at 9:00 a.m., 9:15 a.m., and 9:30 a.m. on Monday. A second series of sample values tracking a second metric may be received at 9:02 a.m., 9:16 a.m., and 9:33 a.m. Training logic 110 may align the 9:00 a.m. sample value from the first metric with the 9:02 a.m. sample value from the second metric, the 9:15 a.m. value with the 9:16 a.m. sample value, and the 9:30 a.m. value with the 9:33 a.m. value. In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: SALUNKE discloses introducing a “lag” to a metric when training a prediction model, including timestamp information; the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding introducing a lag into data used to train a prediction model) shift the second value from the second time step to a third time step based at least in part on the lag value; (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: SALUNKE discloses introducing a “lag” to a metric when training a prediction model by shifting a metric by a threshold period of time (corresponding to recited “lag value”); the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding shifting a value from a time step to a different time step by a “threshold period of time” as in SALUNKE) determine a lagged second value associated with the third time step; (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: SALUNKE discloses introducing a “lag” to a metric when training a prediction model by shifting a metric by a threshold period of time (corresponding to recited “lag value”); the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding shifting a value from a time step to a different time step by a “threshold period of time” as in SALUNKE) generate a time-varying feature from a combination of the first value associated with the first time step of the time-series data and the lagged second value associated with the second time step of the time-series data based at least in part on the relationship detected from the metadata; (SALUNKE, para. 0095: “ In addition or as an alternative to aligning the nearest neighbor, training logic 110 may introduce a phase shift such as a “lag” to one of the metrics. A phase shift may be useful to determine whether a delayed correlation exists between two metrics. For instance, a change in one metric value may be tied to a change in the value in another metric after a threshold period of time has elapsed. Accordingly, training logic 110 may shift second metric by the threshold period of time to determine whether a delayed correlation is present.”; Examiner’s Note: As explained above, SALUNKE teaches the concept of generating a lagged second value, and BLEDSOE teaches feature engineering to determine covariant features with respect to the time series data, corresponding to recited “time-varying features” (see BLEDSOE at para. 0062); the BLEDSOE-YUAN-SALUNKE combination now modifies the forecasting system of BLEDSOE, which includes lag analysis (see paras. 0039-0044 of BLEDSOE), with the teachings of SALUNKE regarding creating a second lagged value, and applies the feature engineering teachings of BLEDSOE to use such second lagged value) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN and SALUNKE as explained above. As disclosed by SALUNKE, one of ordinary skill would have been motivated to do so because introducing a phase shift, such as a “lag” to a metric used in training data, can help to “determine whether a delayed correlation is present” in the training data. (para. 0095). Claim 9 depends from claim 8 and claims a computing system that corresponds to the method of claim 2, and is therefore rejected for the same reasons explained above with respect to claims 2 and 8. Claim 11 depends from claim 8 and claims a computing system that corresponds to the method of claim 4, and is therefore rejected for the same reasons explained above with respect to claims 4 and 8. Regarding Claim 15 BLEDSOE teaches: A non-transitory computer-readable medium having stored thereon a sequence of instructions which, when executed by a processor, causes the processor to perform operations comprising: (BLEDSOE, para. 0117: “Hardware modules may include, for example, a general-purpose processor”; BLEDSOE, para. 0118: “Some embodiments described herein relate to devices with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium or memory) having instructions or computer code thereon for performing various computer-implemented operations.”) The remaining limitations in claim 15 correspond to the limitations of claim 1 and therefore this claim 15 is rejected for the same reasons explained above under 35 U.S.C. 103 with respect to the BLEDSOE and YUAN and SALUNKE references. Claim 16 depends from claim 15 and claims a non-transitory computer-readable medium that corresponds to the method of claim 2, and is therefore rejected for the same reasons explained above with respect to claims 2 and 15. Regarding Claim 17 BLEDSOE and YUAN disclose the method of claim 15. BLEDSOE further teaches: wherein the operations further comprise outputting a forecasted value based at least in part on the input data, the forecasted value being forecasted from the time-series data. (BLEDSOE, para. 0097: “At 607, TSF server 101 trains an entrant forecasting model using data points of the time series included in the first set (i.e., training set). Some machine learning techniques that can be used during the training process include sliding-window methods, recurrent sliding windows, hidden Markov models, maximum entropy Markov models, input-output Markov models, conditional random fields, graph transformer networks, and other supervised machine learning techniques. TSF server 101 executes the trained entrant forecasting model to produce a set of forecasted data points of the time series.”) Claim 18 depends from claim 15 and claims a non-transitory computer-readable medium that corresponds to the method of claim 4, and is therefore rejected for the same reasons explained above with respect to claims 4 and 15. Claim 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over BLEDSOE in view of YUAN and SALUNKE and further in view of US 20220114417 A1, hereinafter referenced as DALLI, and further in view of Li, Qiwei, et al. "Evaluating short-term forecasting of COVID-19 cases among different epidemiological models under a Bayesian framework." GigaScience 10.2 (2021), hereinafter referenced as LI. Regarding Claim 3 BLEDSOE, YUAN, and SALUNKE disclose the method of claim 1. However, BLEDSOE, YUAN, and SALUNKE fail to explicitly teach: further comprising outputting a local explanation, a global explanation, a fitted series, and a rolling-origin cross-validation error. However, in a related field of endeavor (generating explanations for machine learning models, see para. 0011), DALLI teaches: further comprising outputting a local explanation, a global explanation, a fitted series ... (DALLI, para. 0089: “In an exemplary embodiment, the model fit evaluation implementation in hypotheses and concepts component 1511 may be used to determine whether the estimator functions in mediations component 1514 have a high level of fit possibly under increasingly specific constraints or a lower level of fit possibly under more generic and widely applicable constraints.”; DALLI, para. 0142: “An exemplary evaluation component 1535 may also implement statistical tests to determine the quality of model fits to the data, especially for the associations and assumptions part in hypothetical and causal component 1510, with typical tests using the chi-squared test, root mean square error of approximation (RMSEA), comparative fit index (CFI), standardized root mean square residual (SRMR), and other suitable tests. Evaluation component 1535 may also implement causal tests to determine the quality of the casual model fits to the causal model and to the data, especially for the interventions and counterfactuals parts in hypothetical and causal component 1510.” DALLI, para. 0160: “The explanation 9141 may contain multiple types of explanations that can be output individually or together according to some selection criteria. Different types of explanations may include a combination of: (i.) local explanations that are concerned about a particular sub-set of the explanation domain and information domain being used by the underlying machine learning system such as the model 904; (ii.) global explanations that are concerned about the entire explanation domain and information domain being used by the underlying machine learning system such as the model 904”; Examiner’s Note: DALLI discloses global explanations and local explanations (at para. 0160), and further discusses fitting the data to the model and evaluating the level of fit, requiring the series to be fitted in order to be evaluated; the BLEDSOE-YUAN-SALUNKE-DALLI combination modifies the forecasting system of BLEDSOE to also output the global and local explanations of DALLI, as well as the fitted series as calculated by DALLI.) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN, SALUNKE, and DALLI as explained above. As disclosed by DALLI, one of ordinary skill would have been motivated to do so in order to provide “explanations and interpretations aimed at different levels of user expertise” so that different user levels can have better understandings about how the model operates. (para. 0011). However, BLEDSOE, YUAN, SALUNKE and DALLI fail to explicitly teach: and a rolling-origin cross-validation error. However, in a related field of endeavor (data forecasting), LI teaches: and a rolling-origin cross-validation error. (LI, p. 2, Background section: “We perform the rolling-origin cross-validation (ROCV) procedure to compare the prediction error of different stochastic models.”LI, p. 3, “Model comparison through rolling-origin cross-validation”: “Figure 2 shows the ROCV representation for an example of time-series data (T = 17).” Examiner’s Note: the BLEDSOE-YUAN-SALUNKE-DALLI-LI combination modifies the forecasting system of BLEDSOE to also output the ROCV measures of LI with respect to comparing the prediction error of different models) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN, SALUNKE, DALLI, and LI as explained above. As disclosed by LI, one of ordinary skill would be motivated to do so because the ROCV technique “circumvents” a particular issue where a “key assumption of CV is that all data points should be independent and identically distributed (i.i.d.). Unfortunately, time-series data are serially auto-correlated, meaning that the observations are dependent on the time they were recorded” (p. 3, “Model comparison through rolling-origin cross-validation” section) Claim 10 depends from claim 8 and claims a computing system that corresponds to the method of claim 3, and is therefore rejected for the same reasons explained above with respect to claims 3 and 8. Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over BLEDSOE in view of YUAN and SALUNKE and further in view of US 20220036387 A1, hereinafter referenced as PAPADIMITRIOU. Regarding Claim 6 BLEDSOE, YUAN, and SALUNKE disclose the method of claim 1. However, BLEDSOE, YUAN, and SALUNKE fail to explicitly teach: wherein the machine learning forecasting model implements a gradient boosting technique to generate the forecasted value for the time-series data. However, in a related field of endeavor (forecasting through a multi-level machine learning system, para. 0001), PAPADIMITRIOU teaches: wherein the machine learning forecasting model implements a gradient boosting technique to generate the forecasted value for the time-series data. (PAPADIMITRIOU, para. 0050: “In an embodiment, multivariate machine learning model 214 comprises XGBoost. XGBoost is a gradient boosted tree implementation. The time series models cannot use extra features such as macroeconomic data in their predictions. XGBoost is suitable for handling macroeconomic data since it is able to handle missing values. XGBoost also has an advantage in speed of training and performance compared to other machine learning algorithms.”; (EN): the BLEDSOE-YUAN-SALUNKE-PAPADIMITRIOU combination now modifies the machine learning forecasting system of BLEDSOE to use the XGBoost gradient boosting techniques of PAPADIMITRIOU in order to provide an advantage in speed of training and performance.) Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN, SALUNKE, and PAPADIMITRIOU as explained above. As disclosed by PAPADIMITRIOU, one of ordinary skill would be motivated to do so because “XGBoost also has an advantage in speed of training and performance compared to other machine learning algorithms.” (para. 0050). Claim 13 depends from claim 8 and claims a computing system that corresponds to the method of claim 6, and is therefore rejected for the same reasons explained above with respect to claims 6 and 8. Claim 20 depends from claim 15 and claims a non-transitory computer-readable medium that corresponds to the method of claim 6, and is therefore rejected for the same reasons explained above with respect to claims 6 and 15. Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over BLEDSOE in view of YUAN and SALUNKE and further in view of Thejas, G. S., et al. "A multi-time-scale time series analysis for click fraud forecasting using binary labeled imbalanced dataset." 2019 4th International Conference on Computational Systems and Information Technology for Sustainable Solution (CSITSS). IEEE, 2019, hereinafter reference as THEJAS. B BLEDSOE, YUAN, and SALUNKE disclose the method of claim 1. However, BLEDSOE, YUAN, and SALUNKE fail to explicitly teach: wherein the time-varying feature comprises a multi-time step rolling mean value. However, in a related field of endeavor (forecasting using multi-time-scale time series analysis), THEJAS teaches: wherein the time-varying feature comprises a multi-time step rolling mean value. (THEJAS, p. 2, section 1: “In our proposed approach, we model multi-timescale seasonality to forecast the fraudulent behavior in terms of minutes and hours interval. ... Our proposed approach can be considered as an extension of Seasonal Auto-regressive Integrated Moving (SARIMA) model”; THEJAS, p. 6, section 4: “In this paper, we presented a generalized multi-timescale time series model to forecast click fraud behavior in terms of minutes and hours. Our proposed approach also allows us to forecast the behavior in terms of seconds, and even a smaller timescale could be examined. This is the very first attempt that has been made in this regard, which deals with forecasting click fraud behavior using AR and MA time series modelling.” Examiner’s Note: THEJAS teaches modeling using multi-time scale time series data (corresponding to multi-time step) where such modeling uses a moving average (corresponding to “rolling mean value”); the BLEDSOE-YUAN-SALUNKE-THEJAS combination now modifies the machine learning forecasting system of BLEDSOE to use the moving average modelling of THEJAS with respect to multi-time scale time series data, as a feature to be input into the machine learning forecasting model. Before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the machine learning forecasting techniques of BLEDSOE with the teachings of YUAN, SALUNKE, and THEJAS as explained above. One of ordinary skill would understand the benefit of using a moving average value, based on a multi-time scale, in order to input a rough smoothing estimate of the forecast as input to the machine learning model for consideration. Claim 14 depends from claim 8 and claims a computing system that corresponds to the method of claim 7, and is therefore rejected for the same reasons explained above with respect to claims 7 and 8. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20200336302 A1 (Jetchev). “More concretely, consider x={x.sub.t} (independent time series) and y={y.sub.t} (dependent time series). One is trying to predict y.sub.t in terms of the time-lagged time series y.sub.t-δ (which represents the time series y time shifted by a lag δ) and x.sub.t-δ, (which represents the time series x time shifted by a lag δ′) for various values of the lags δ and δ′.” (see para. 0025). US 20210397177 A1 (Nataraj). “In one or more cases, the phase space construction engine 112 may estimate the time lag for a processed signal in a time series by plotting the time series for the processed signal against another processed signal that is later in time. In one or more other cases, the phase space construction engine 112 may estimate the time lag for a processed signal in a time series by shifting the time series of the processed signal, and comparing the original time series of the processed signal to the shifted time series of the processed signal.” (para. 0039). US 20220147397 A1 (Bhattacharya). “The algorithm calculates the observed time-delay by estimating the time delay at which the correlation reaches its maximum value. To find the maximum correlation given the time series independent and dependent variables, one of the variables may be anchored while the other is shifted through time by estimated time lag values.” (para. 0042). Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C LEE whose telephone number is (571)272-4933. The examiner can normally be reached M-F 12:00 pm - 8:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL C. LEE/Examiner, Art Unit 2128 /OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128
Read full office action

Prosecution Timeline

Jun 30, 2022
Application Filed
Jun 29, 2025
Non-Final Rejection — §101, §103
Oct 17, 2025
Interview Requested
Nov 05, 2025
Applicant Interview (Telephonic)
Nov 05, 2025
Examiner Interview Summary
Nov 25, 2025
Response Filed
Dec 17, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603081
METHOD AND SERVER FOR A TEXT-TO-SPEECH PROCESSING
2y 5m to grant Granted Apr 14, 2026
Patent 12602605
QUANTUM COMPUTER ARCHITECTURE BASED ON MULTI-QUBIT GATES
2y 5m to grant Granted Apr 14, 2026
Patent 12591915
METHODS AND SYSTEMS FOR DETERMINING RECOMMENDATIONS BASED ON REAL-TIME OPTIMIZATION OF MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 31, 2026
Patent 12585743
INTERFACE ACCESS PROCESSING METHOD, COMPUTER DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12568935
AI-BASED LIVESTOCK MANAGEMENT SYSTEM AND LIVESTOCK MANAGEMENT METHOD THEREOF
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

3-4
Expected OA Rounds
59%
Grant Probability
86%
With Interview (+27.1%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.

Sign in with your work email

Enter your email to receive a magic link. No password needed.

Personal email addresses (Gmail, Yahoo, etc.) are not accepted.

Free tier: 3 strategy analyses per month