Prosecution Insights
Last updated: April 18, 2026
Application No. 17/804,082

Regression and Time Series Forecasting

Non-Final OA (§103)

Filed: May 25, 2022
Examiner: KHAN, SHAHID K
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 3 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 74% (above average; 287 granted / 389 resolved; +18.8% vs TC avg)
Interview Lift: +15.7% (strong), measured across resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 420 total applications across all art units
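As a sanity check, the panel's headline figures follow from the raw counts above. The sketch below assumes the dashboard combines the base allow rate and the interview lift additively and rounds to whole percentages; that modeling choice, and the variable names, are assumptions, not part of the report.

```python
# Reproduce the examiner panel's headline numbers from the raw counts.
granted = 287
resolved = 389

allow_rate = granted / resolved               # career allow rate (shown as 74%)
interview_lift = 0.157                        # +15.7% lift reported for interviews
with_interview = allow_rate + interview_lift  # shown as 90% (assumed additive)
implied_tc_avg = allow_rate - 0.188           # panel reports +18.8% vs TC average

print(f"allow rate:     {allow_rate:.1%}")
print(f"with interview: {with_interview:.1%}")
print(f"implied TC avg: {implied_tc_avg:.1%}")
```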

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 389 resolved cases.

Office Action

§103
DETAILED ACTION

This communication is in response to the after-final amendment filed 02/04/26, in which claims 1 and 11 were amended and claims 8 and 18 were canceled. Claims 1-7, 9-17, and 19-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/04/26 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1 and 10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Further description of "basis regularization" as a "data-dependent global basis" may help to distinguish over the prior art.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 2, 5-7, 9, 11, 12, 15-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rivera-Castro, Rodrigo, Ivan Nazarov, and Evgeny Burnaev, "Towards forecast techniques for business analysts of large commercial data sets using matrix factorization methods," arXiv preprint arXiv:2009.04359 (2020) ("Rivera-Castro"), in view of Shiratori, Tomokaze, Ken Kobayashi, and Yuichi Takano, "Prediction of hierarchical time series using structured regularization and its application to artificial neural networks," PLOS ONE 15.11 (2020): e0242099 ("Shiratori"), and Yu, Hsiang-Fu, Nikhil Rao, and Inderjit S. Dhillon, "Temporal regularized matrix factorization for high-dimensional time series prediction," Advances in Neural Information Processing Systems 29 (2016) ("Yu").
Regarding claim 1, Rivera-Castro discloses [a] computer-implemented method when executed by data processing hardware causes the data processing hardware to perform operations comprising:

obtaining a set of hierarchical time series, each time series in the set of hierarchical time series comprising a plurality of time series data values (Abstract ("The subject of the research is forecasting product demand using techniques for multivariate hierarchical time series prediction."); p. 5 ("Let Y be a T×n matrix of observations of n objects spanning the period of T time steps, i.e. each column i = 1, …, n of Y is a time series y(i) = (Y_ti)_{t=1..T} related to the i-th object."));

determining, using the set of hierarchical time series, a basis regularization of the set of hierarchical time series (p. 5 ("The problem of factorizing a fully or partially observed T×n matrix Y consists of finding d-dimensional factors X and the corresponding factor loadings F, in the form of T×d and d×n matrices respectively, such that their product XF most accurately recovers the observed Y, i.e. Y_ti ≈ Σ_{j=1..d} X_tj F_ji. This is usually achieved by solving the following optimization problem: [equation (3), an image in the original: minimize over X and F the reconstruction error plus λ_X R_X and λ_F R_F] where λ_F and λ_X [basis regularization] are non-negative regularization coefficients which govern the trade-off between the reconstruction error and the regularizing terms R_F and R_X."); see also p. 6, equation (4), which teaches a further improvement to equation (3) by imposing certain structural requirements on X to apply matrix factorization to time series prediction).

Rivera-Castro does not expressly disclose wherein the basis regularization represents a data-dependent global basis of the set of hierarchical time series (but see Shiratori Abstract ("This paper discusses the prediction of hierarchical time series, where each upper-level time series is calculated by summing appropriate lower-level time series. Forecasts for such hierarchical time series should be coherent, meaning that the forecast for an upper-level time series equals the sum of forecasts for corresponding lower-level time series. Previous methods for making coherent forecasts consist of two phases: first computing base (incoherent) forecasts and then reconciling those forecasts based on their inherent hierarchical structure. To improve time series predictions, we propose a structured regularization method for completing both phases simultaneously. The proposed method is based on a prediction model for bottom-level time series and uses a structured regularization term to incorporate upper-level forecasts into the prediction model. We also develop a backpropagation algorithm specialized for applying our method to artificial neural networks for time series prediction. Experimental results using synthetic and real-world datasets demonstrate that our method is comparable in terms of prediction accuracy and computational efficiency to other methods for time series prediction.")).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rivera-Castro to incorporate the teachings of Shiratori, i.e., to bring upper-level forecasts into the prediction model through a structured regularization term by modifying the non-negative regularization coefficients, at least because doing so uses inherent structural relations among explanatory variables to construct a statistical model for hierarchical time series data. See Shiratori Introduction.

determining, using the set of hierarchical time series, an embedding regularization of the set of hierarchical time series (p. 5 ("The problem of factorizing a fully or partially observed T×n matrix Y consists of finding d-dimensional factors X and the corresponding factor loadings F, in the form of T×d and d×n matrices respectively, such that their product XF most accurately recovers the observed Y, i.e. Y_ti ≈ Σ_{j=1..d} X_tj F_ji. This is usually achieved by solving the following optimization problem: [equation (3), an image in the original] where λ_F [embedding regularization] and λ_X are non-negative regularization coefficients which govern the trade-off between the reconstruction error and the regularizing terms R_F and R_X."); see also p. 6, equation (4), which teaches a further improvement to equation (3) by imposing certain structural requirements on X to apply matrix factorization to time series prediction).

Rivera-Castro does not expressly disclose wherein the embedding regularization provides a coherence constraint on a model (but see Yu p. 2 ("In this paper, we propose a novel temporal regularized matrix factorization framework (TRMF) for high-dimensional time series analysis. In TRMF, we consider a principled approach to describe the structure of temporal dependencies among latent temporal embeddings {x_t} and design a temporal regularizer to incorporate this temporal structure into the standard MF formulation. Unlike most existing MF approaches, our TRMF method supports data-driven temporal dependency learning and also brings the ability to forecast future values to a matrix factorization approach. In addition, inherited from the property of MF approaches, TRMF can easily handle high-dimensional time series data even in the presence of many missing values. As a specific example, we demonstrate a novel autoregressive temporal regularizer which encourages AR structure among temporal embeddings {x_t}.")).
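The factorization objective quoted above (equation (3) of Rivera-Castro) has the general form of a reconstruction error plus the two regularizers the examiner maps to the claimed "basis" and "embedding" regularizations. The following is a minimal numerical sketch of that form only; the squared-error loss and the Frobenius-norm penalties standing in for R_X and R_F are assumptions for illustration, not the applicant's claimed method.

```python
import numpy as np

# Sketch of a regularized matrix-factorization objective of the form quoted
# in the rejection: approximate a T x n observation matrix Y by X @ F,
# trading reconstruction error against penalties on X and F.
rng = np.random.default_rng(0)
T, n, d = 50, 8, 3                 # time steps, series count, latent dimension
Y = rng.normal(size=(T, n))        # observed (hierarchical) time series matrix
X = rng.normal(size=(T, d))        # latent temporal factors ("basis vectors")
F = rng.normal(size=(d, n))        # factor loadings ("weight vectors")

def objective(Y, X, F, lam_X, lam_F):
    recon = np.sum((Y - X @ F) ** 2)   # reconstruction error
    R_X = np.sum(X ** 2)               # penalty on factors (assumed Frobenius)
    R_F = np.sum(F ** 2)               # penalty on loadings (assumed Frobenius)
    return recon + lam_X * R_X + lam_F * R_F

print(objective(Y, X, F, lam_X=0.1, lam_F=0.1))
```

Nonzero λ_X and λ_F strictly increase the objective for nonzero X and F, which is the trade-off between fit and regularization the quoted passage describes.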
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rivera-Castro to incorporate the teachings of Yu to impose autoregressive time-series properties on the regularizing term R_F, at least because doing so would enable handling high-dimensional time series data even in the presence of many missing values.

Rivera-Castro further discloses training a model using the set of hierarchical time series and a loss function based on the basis regularization and the embedding regularization (p. 5 (equation (3) indicates training in the form of minimization over F and X based on target matrix Y); see also p. 6 ("The problem (4) or, in general, any biconvex problem of the form (3), is solved numerically using coordinate descent algorithms that alternates between minimizing the objective with respect to each matrix until some convergence criteria are met.")).

Rivera-Castro does not expressly disclose wherein the embedding regularization provides a coherence constraint on the trained model (but see Shiratori Introduction (last paragraph) ("In this study, we aimed to develop a structured regularization method that takes full advantage of hierarchical structure for better time series predictions. Our method is based on a prediction model for bottom-level time series and uses a structured regularization term to incorporate upper-level forecasts into the prediction model. This study particularly focused on applying our method to artificial neural networks, which have been effectively used in time series prediction [33-38]. We developed a backpropagation algorithm specialized for our structured regularization model based on artificial neural networks. Experiments involving the application of our method to synthetic and real-world datasets demonstrated that our method was comparable in terms of prediction accuracy and computational efficiency to other methods that develop coherent forecasts for hierarchical time series.")). Rivera-Castro is combinable with Shiratori for the same reasons as set forth above.

Rivera-Castro further discloses forecasting, using the trained model and one of the time series in the set of hierarchical time series, an expected time series data value in the one of the time series (p. 6 ("Forecasting beyond the last observation in Y is done using the estimated latent factors X and the parameters θ of their autoregressive dynamics.")).

Claim 11 is a system claim corresponding to claim 1 and is similarly rejected.

Regarding claim 2, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 1 as discussed above. Rivera-Castro further discloses wherein the loss function comprises minimizing a sum of a mean absolute error, the basis regularization, and the embedding regularization (p. 5 (equation (3) is a loss function that optimizes F and X by minimizing the sum of a mean absolute error, the basis regularization, and the embedding regularization)). Claim 12 is a system claim corresponding to claim 2 and is similarly rejected.

Regarding claim 5, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 1 as discussed above. Rivera-Castro further discloses wherein the set of hierarchical time series comprises a pre-defined hierarchy of a plurality of nodes, each node associated with one of the time series data values (Abstract ("The subject of the research is forecasting product demand using techniques for multivariate hierarchical time series prediction that are both precise and accessible to non-technical business experts.")). Claim 15 is a system claim corresponding to claim 5 and is similarly rejected.
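The "coherence constraint" at issue in claims 1 and 11 can be illustrated with a toy two-level hierarchy: a forecast set is coherent, in Shiratori's sense, when each upper-level forecast equals the sum of its children. The summing matrix, numbers, and squared penalty below are illustrative assumptions, not data from the application or the cited references.

```python
import numpy as np

# Toy hierarchy: "total" aggregates all leaves, region A aggregates
# leaves 1-2, and region B is leaf 3.
bottom_forecasts = np.array([12.0, 7.5, 30.5])   # leaf-level forecasts
S = np.array([[1.0, 1.0, 1.0],                   # total = all leaves
              [1.0, 1.0, 0.0],                   # region A = leaves 1 + 2
              [0.0, 0.0, 1.0]])                  # region B = leaf 3
upper_forecasts = S @ bottom_forecasts           # coherent by construction

# A structured-regularization term penalizes incoherence of independently
# produced upper-level forecasts instead of reconciling them after the fact.
independent_upper = np.array([49.0, 20.0, 31.0])
coherence_penalty = float(np.sum((independent_upper - S @ bottom_forecasts) ** 2))

print(upper_forecasts)    # total 50.0, region A 19.5, region B 30.5
print(coherence_penalty)  # squared incoherence of the independent forecasts
```

Driving a penalty of this shape to zero during training is one way a regularizer can act as a coherence constraint on the trained model, which is the role the rejection assigns to the Shiratori/Yu combination.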
Regarding claim 6, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 1 as discussed above. Rivera-Castro further discloses wherein the basis regularization is based on a set of basis vectors associated with the set of hierarchical time series (p. 5 ("The problem of factorizing a fully or partially observed T×n matrix Y consists of finding d-dimensional factors X [basis vectors] and the corresponding factor loadings F, in the form of T×d and d×n matrices respectively, such that their product XF most accurately recovers the observed Y, i.e. Y_ti ≈ Σ_{j=1..d} X_tj F_ji.")). Claim 16 is a system claim corresponding to claim 6 and is similarly rejected.

Regarding claim 7, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 1 as discussed above. Rivera-Castro further discloses wherein the embedding regularization is based on a set of weight vectors associated with the set of hierarchical time series (p. 5 ("The problem of factorizing a fully or partially observed T×n matrix Y consists of finding d-dimensional factors X and the corresponding factor loadings F [weight vectors], in the form of T×d and d×n matrices respectively, such that their product XF most accurately recovers the observed Y, i.e. Y_ti ≈ Σ_{j=1..d} X_tj F_ji.")). Claim 17 is a system claim corresponding to claim 7 and is similarly rejected.

Regarding claim 9, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 1 as discussed above. Rivera-Castro further discloses wherein the model comprises a differentiable learning model (p. 6 ("The problem (4) or, in general, any biconvex problem of the form (3), is solved numerically using coordinate descent algorithms [differentiable learning model] that alternates between minimizing the objective with respect to each matrix until some convergence criteria are met.")). Claim 19 is a system claim corresponding to claim 9 and is similarly rejected.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Rivera-Castro, Shiratori, and Yu as applied to claims 1 and 11 above, and further in view of Konečný, Jakub, et al., "Mini-batch semi-stochastic gradient descent in the proximal setting," IEEE Journal of Selected Topics in Signal Processing 10.2 (2015): 242-255 ("Konecny").

Regarding claim 3, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 1 as discussed above. Rivera-Castro does not expressly disclose wherein training the model comprises using mini-batch stochastic gradient descent (but see Konecny Abstract (describing the use of mini-batching in the computation of stochastic gradient descent optimization)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rivera-Castro to incorporate the teachings of Konecny to use mini-batch stochastic gradient descent to optimize the loss function in equation (3), at least because doing so would enable performance improvements by way of simpler parallel implementation. See Konecny Abstract. Claim 13 is a system claim corresponding to claim 3 and is similarly rejected.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Rivera-Castro, Shiratori, and Yu as applied to claims 1 and 11 above, and further in view of Tanaka, Yusuke, et al., "Refining coarse-grained spatial data using auxiliary spatial data sets with various granularities," Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, No. 01 (2019) ("Tanaka").

Regarding claim 4, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 1 as discussed above. Rivera-Castro does not expressly disclose wherein the operations further comprise, prior to training the model, for each respective time series data value, downscaling the respective time series data value based on a level of hierarchy associated with the respective time series data value (but see Tanaka p. 5092 ("Regression models (linear and non-linear) are used for estimating the relationships between target data and auxiliary data sets. A few methods can construct the regression models under the spatial aggregation constraints (Murakami and Tsutsumi 2011; Park 2013). The constraints state that a value associated with a coarse-grained region is a linear average of their constituent values in a fine-grained partition.")). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rivera-Castro to incorporate the teachings of Tanaka to use an aggregation constraint in which the value at a node associated with a high-level coarse-grained region is a linear average of its constituent values in a fine-grained partition. Claim 14 is a system claim corresponding to claim 4 and is similarly rejected.

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Rivera-Castro, Shiratori, and Yu as applied to claims 9 and 19 above, and further in view of Carbonneau, Real, Kevin Laframboise, and Rustam Vahidov, "Application of machine learning techniques for supply chain demand forecasting," European Journal of Operational Research 184.3 (2008): 1140-1154 ("Carbonneau").

Regarding claim 10, Rivera-Castro, in view of Shiratori and Yu, discloses the invention of claim 9 as discussed above. Rivera-Castro teaches using novel machine learning methods for short-term demand forecasting, see p. 2, but does not expressly disclose wherein the differentiable learning model comprises a recurrent neural network, a temporal convolutional network, or a long short term memory network (but see Carbonneau Abstract ("Full collaboration in supply chains is an ideal that the participant firms should try to achieve. However, a number of factors hamper real progress in this direction. Therefore, there is a need for forecasting demand by the participants in the absence of full information about other participants' demand. In this paper we investigate the applicability of advanced machine learning techniques, including neural networks, recurrent neural networks, and support vector machines, to forecasting distorted demand at the end of a supply chain (bullwhip effect).")). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rivera-Castro to incorporate the teachings of Carbonneau to use a recurrent neural network as the machine learning model to forecast short-term demand, at least because doing so would provide better results. See Rivera-Castro p. 2. Claim 20 is a system claim corresponding to claim 10 and is similarly rejected.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHID KHAN, whose telephone number is (571) 270-0419. The examiner can normally be reached M-F, 9-5 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Usmaan Saeed, can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAHID K KHAN/
Primary Examiner, Art Unit 2146

Prosecution Timeline

May 25, 2022
Application Filed
May 05, 2025
Non-Final Rejection — §103
Jul 21, 2025
Interview Requested
Jul 28, 2025
Applicant Interview (Telephonic)
Jul 30, 2025
Examiner Interview Summary
Aug 08, 2025
Response Filed
Oct 31, 2025
Final Rejection — §103
Feb 04, 2026
Response after Final Action
Feb 26, 2026
Request for Continued Examination
Mar 09, 2026
Response after Non-Final Action
Apr 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12591768: DEEP LEARNING ACCELERATION WITH MIXED PRECISION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579516: System and Method for Organizing and Designing Comment (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566813: SYSTEMS AND METHODS FOR RENDERING INTERACTIVE WEB PAGES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547298: Display Method and Electronic Device (granted Feb 10, 2026; 2y 5m to grant)
Patent 12530916: MULTIMODAL MULTITASK MACHINE LEARNING SYSTEM FOR DOCUMENT INTELLIGENCE TASKS (granted Jan 20, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74% (90% with interview, a +15.7% lift)
Median Time to Grant: 2y 11m
PTA Risk: High

Based on 389 resolved cases by this examiner. Grant probability is derived from the career allow rate.
