Prosecution Insights
Last updated: April 19, 2026
Application No. 16/496,784

METHOD AND SYSTEM FOR ADJUSTABLE AUTOMATED FORECASTS

Non-Final OA: §101, §103
Filed: Sep 23, 2019
Examiner: SCHEUNEMANN, RICHARD N
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Kinaxis Inc.
OA Round: 9 (Non-Final)

Grant Probability: 6% (At Risk); 15% with interview
OA Rounds: 9-10
To Grant: 4y 7m

Examiner Intelligence

Career Allow Rate: 6% (35 granted / 551 resolved; -45.6% vs TC avg)
Interview Lift: +8.4% (moderate; among resolved cases with an interview)
Avg Prosecution: 4y 7m (typical timeline)
Total Applications: 607 across all art units (56 currently pending)

Statute-Specific Performance

§101: 37.4% (-2.6% vs TC avg)
§103: 37.6% (-2.4% vs TC avg)
§102: 9.3% (-30.7% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 551 resolved cases.

Office Action

§101 §103
Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114 was filed in this application after a decision by the Patent Trial and Appeal Board, but before the filing of a Notice of Appeal to the Court of Appeals for the Federal Circuit or the commencement of a civil action. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant’s submission filed on December 22, 2025, has been entered. Claims 1 and 8 are amended. Claim 5 is canceled. Claims 1-4, 6, 8-13, and 15-22 are pending.

Response to Remarks

35 USC §101 Rejections

The rejection for lack of subject matter eligibility is maintained for the reasons set forth in the rejection, below. The recitation of supervised machine learning does not render the claims eligible, at least because the recitation merely amounts to an environment for implementing the abstract idea of generating a forecast.

35 USC §103 Rejections

Amendments to independent claims 1 and 8 changed the scope of the claims, necessitating further consideration of the prior art references. The independent claims remain obvious over Wu in view of Tariq and Abe, as set forth below. Supervised reinforcement learning is a conventional machine learning method, as evidenced by the cited portions of Abe. The rejection of the dependent claims stands or falls with the rejection of the independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. 
The Manual of Patent Examining Procedure (MPEP) provides detailed rules for determining subject matter eligibility for claims in §2106. Those rules provide a basis for the analysis and finding of ineligibility that follows.

Claims 1-4, 6, 8-13, and 15-22 are rejected under 35 U.S.C. 101. The claimed invention is directed to non-statutory subject matter because the claimed invention recites a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Although claims 1-4, 6, 8-13, and 15-22 are all directed to one of the four statutory categories of invention, the claims are directed to generating a forecast (as evidenced by exemplary claim 1: “generating . . . a graphical representation of the forecast;” and “generating . . . an adjusted outcome measure of the forecast”), an abstract idea. Certain methods of organizing human activity are ineligible abstract ideas, including managing personal behavior or relationships or interactions between people. See MPEP §2106.04(a)(2).

The limitations of exemplary claim 1 include: “storing . . . historical data;” “receiving . . . at least one input parameter;” “determining . . . a set of forecasts each based on the different parameters. . . and determining at least one set of optimized parameters;” “generating . . . a graphical representation of the forecast;” “outputting . . . the graphical representation of the forecast to the user;” “receiving . . . an adjustment to at least one of the input parameters . . . or the optimized parameters;” “determining . . . an adjusted outcome measure of the forecast for the promotion;” “generating . . . an adjusted graphical representation of the forecast;” and “displaying . . . the adjusted graphical representation.” The steps are all steps for managing personal behavior that, when considered individually and as a whole, are part of the abstract idea of generating a forecast. 
The dependent claims further recite steps for data analysis (see claims 2-6, 9-13, 15, and 16-18) and data display (see claims 2, 9, and 19-22) that are part of the abstract idea of generating a forecast. These claim elements, when considered alone and in combination, are considered to be abstract ideas because they are directed to a method of organizing human activity which includes steps a human being could follow to analyze sales generated from a marketing campaign.

Under step 2A of the subject matter eligibility analysis, a claim that is directed to a judicial exception must be evaluated to determine whether the claim provides a practical application of the judicial exception. Additional elements of the independent claims amount to generic computer networking hardware that does not provide a practical application (a computer-implemented method of independent claim 1; and a system with processors, a storage device, and interface in independent claim 8. Amended language of the claims includes a server and a client device, which are also generic computer hardware). The claims do recite the use of machine learning with a neural network (“training or instantiating . . . a machine learning model with a training set . . .;” and “adjusting the machine learning model . . .” See exemplary claim 1), but the abstract idea of generating a forecast is generally linked to a supervised machine learning environment and neural network for implementation. Therefore, the recitation of machine learning does not provide a practical application. See MPEP §2106.05(h). The claims do not recite an improvement to another technology or technical field, nor do they recite an improvement to the functioning of the computer itself. See MPEP §2106.05(a). 
The claims require no more than a generic computer (a computer-implemented method and processors in independent claim 1; and a system with processors, a storage device, and interface in independent claim 8) to implement the abstract idea, which does not amount to significantly more than an abstract idea. See MPEP §2106.05(f). Because the claims only recite use of a generic computer, they do not apply the judicial exception with a particular machine. See MPEP §2106.05(b). For these reasons, the claims do not provide a practical application of the abstract idea, nor do they amount to significantly more than an abstract idea under step 2B of the subject matter eligibility analysis. Using a generic computer to implement an abstract idea does not provide an inventive concept. Therefore, the claims recite ineligible subject matter under 35 USC §101.

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

Claim(s) 1, 2, 4, 8, 9, 11, and 17-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 8,010,404 B1 to Wu et al. (hereinafter ‘WU’) in view of US 20170220947 A1 to Tariq (hereinafter ‘TARIQ’), and US 20040015386 A1 to Abe et al. (hereinafter ‘ABE’).

Claim 1 (Currently Amended)

WU discloses a computer-implemented (see col 37, ln 54-col 38, ln 2; and Figs. 7-9; the invention functions over a computer network). 
WU does not explicitly disclose, but TARIQ discloses, cloud-based method (see ¶[0030]; a cloud-based application) for adjusting a machine learning model (see ¶[0007]; an artificial neural network with hidden layers and nodes that approximate behavior based on a trained statistical model) concurrently with generation of adjustable automated forecasts for a promotion (see ¶[0045] and [0050]; marketing data and sales data), the method comprising: storing, by a processor on a server (see ¶[0056]-[0057]; metrics are stored on a client device of a user. Computing devices may implement the method, as either a client or server or plurality of servers), historical data (see abstract; historical data associated with the user) WU further discloses related to one or more products and a plurality of previous promotions and their respective parameters (see col 39, ln 11-27 and Figs. 22 and 50; a merchandising decomposition analysis [MDA] engine with point of sale and historic data. See also col 7, ln 26-50 & col 16, ln 45-55 and Fig. 2; marketing conditions, price points, and price discounts); receiving, by the processor (see col 36, ln 64-col 37, ln 18 & Fig. 9; a computer system with processors). WU does not explicitly disclose, but TARIQ discloses, on said server, from a client device of a user communicatively coupled to said server via a network (see ¶[0057]-[0059] & [0075]-[0076] and Fig. 4; computing devices may be used to implement the systems and methods as either a client or as a server or plurality of servers. Components can be interconnected on a network), WU further discloses at least one input parameter for the promotion (see col 7, ln 26-50 & col 11, ln 45-64; marketing conditions, price points, and causal variables representing an advertisement. See also col 12, ln 10-17; advertisement price. 
See also col 25, ln 61-col 26, ln 3; initial dataset information is input into the system); training or instantiating, by the processor on said server, a machine learning model with a training set, the training set comprising the received historical data and the received at least one input parameter (see col 24, ln 59-63 and col 39, ln 11-27 and Fig. 19B; sales volume predicted based on price using a regression model using point of sale and historic data); automatically determining, by the processor on said server, using the machine learning model (see col 24, ln 59-63 and col 39, ln 11-27 and Fig. 19B; sales volume predicted based on price using a regression model using point of sale and historic data), a set of forecasts each based on different parameters, and determining at least one set of optimized parameters (see col 1, ln 14-21; price and promotion response analysis to provide fast and efficient forecasts with price optimization for business planning) that maximize an outcome measure of the forecast for the promotion (see col 1, ln 22-27; maximizing profit or demand or for a variety of other objectives), wherein the outcome measure is a predictive or explanatory score of the outcome of the promotion (see again col 1, ln 22-27; maximizing profit or demand or for a variety of other objectives. See also col 24, ln 49-63; predict promotional effects and actual sales volume. See also col 27, ln 53-61; predict demand), automatically generating, by the processor on said server, a graphical representation of the forecast having the maximized outcome measure (see col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration); automatically outputting on the client device, the at least one input parameter, at least one optimized parameter, and the graphical representation of the forecast to the user (see col 7, ln 3-25; the output of the econometric engine is the input of the optimization engine. 
MDA engine is coupled to the econometric engine and the financial model engine. See also col 8, ln 41-52 and col 40, ln 1-19 & Figs. 23, 65, and 71; imputed econometric variables may be output to other applications. The forecast is displayed by the user interface to the user. Avg price and cost parameters displayed in the promotion response analysis with associated forecasted lifts); receiving, by the processor on the server, from the user via a parameter input interface object presented on the client device together with the graphical representation (see col 36, ln 53-63 & col 37, ln 18-34; computer system includes a keyboard and mouse. CPU includes touch-sensitive displays), an adjustment to at least one of the input parameters or at least one of the optimized parameters (see col 44, ln 43-col 45, ln 33 & Fig. 28; an adjusting factor may be the difference between pre- and post- optimization retail prices. Forecasted profits may be compared to pre-optimization profits, and raw profit benefit may be adjusted). WU does not specifically disclose, but ABE discloses, adjusting the machine learning model with the adjustment to the at least one of the input parameters or the at least one of the optimized parameters (see ¶[0099]-[0101]; use batch reinforcement learning and reformulate value iteration as a supervised learning problem. Characterize input states and actions on subsequent iterations to recalculate target values). 
WU further discloses automatically determining by the processor of the server, in real time (see col 36, ln 64-col 37, ln 17; a computer system with processors and video displays), an adjusted outcome measure of the forecast for the promotion by applying the adjustment to the machine learning model (see col 46, ln 61-col 47, ln 6; raw benefit may be based on promo exclusion rules configured by the user via the user interface); automatically generating, by the processor on the server, an adjusted graphical representation of the forecast having the adjusted outcome measure (see again col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration); and automatically displaying, in real-time, (see col 64, ln 56-63; the promotion response analysis uses the coefficients and multipliers created during the most recent modeling run. See also col 36, ln 64-col 37, ln 17; a computer system with processors and video displays) on the client device, the adjusted graphical representation to the user (see again col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration). WU does not explicitly disclose, but TARIQ discloses, wherein the machine learning model is a neural network machine learning model (see abstract and ¶[0002]-[0004]; a forecast using an artificial neural network. Utilize statistical machine learning techniques using an artificial neural network to predict a set of data metrics), wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises changing weighting given to at least one of the input parameters or the optimized parameters by the machine learning model (see ¶[0002]; artificial neural networks include sets of adaptive weights such as numerical parameters that are adjusted by a learning algorithm). 
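The claim-1 mapping above walks through training a model on historical promotion data, forecasting an outcome measure, and re-forecasting after a user adjustment. That loop can be sketched minimally as follows; the closed-form linear fit is only a stand-in for WU's regression model, and every name and number is an illustrative assumption, not the applicant's or the references' actual implementation:

```python
# Minimal sketch of the mapped loop: fit volume ~ intercept + slope*price on
# historical (price, volume) data, forecast an outcome measure (profit) for a
# candidate promotion price, then recompute after a user adjustment.

def fit_linear(history):
    """Ordinary least squares over (price, volume) pairs."""
    n = len(history)
    mean_p = sum(p for p, _ in history) / n
    mean_v = sum(v for _, v in history) / n
    sxx = sum((p - mean_p) ** 2 for p, _ in history)
    sxy = sum((p - mean_p) * (v - mean_v) for p, v in history)
    slope = sxy / sxx
    return mean_v - slope * mean_p, slope  # (intercept, slope)

def forecast_profit(model, price, unit_cost):
    """Outcome measure: predicted volume times unit margin."""
    intercept, slope = model
    return (intercept + slope * price) * (price - unit_cost)

history = [(8.0, 1200), (9.0, 1050), (10.0, 900), (11.0, 750)]
model = fit_linear(history)

baseline = forecast_profit(model, price=9.5, unit_cost=6.0)
adjusted = forecast_profit(model, price=10.5, unit_cost=6.0)  # user adjusts a parameter
```

Re-fitting the model after the adjustment, rather than merely re-evaluating it, would correspond to the re-training limitation at issue in claims 17 and 18.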
WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9). TARIQ discloses computing data metrics using neural networks with adaptive weights that make predictions using marketing data, implemented on a cloud system with servers and client devices. It would have been obvious to include the neural network and cloud system as taught by TARIQ in the system executing the method of WU with the motivation to analyze promotion responses via a computer system. WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9) to maximize profits (see col 1, ln 22-27). ABE discloses decision making for customer relationship management that includes the use of supervised reinforcement learning to maximize profits with respect to promotions (see ¶[0008]-[0009]). It would have been obvious to include supervised reinforcement learning as taught by ABE in the system executing the method of WU with the motivation to maximize profits for promotional campaigns.

Claim 2 (Previously Presented)

The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 1. WU additionally discloses further comprising receiving, by the processor on said server, from the client device (see col 36, ln 64-col 37, ln 17 and Fig. 9; a computer system and processors), a subsequent adjustment to at least one of the input parameters or at least one of the optimized parameters from the user (see col 44, ln 43-col 45, ln 33 & Fig. 28; an adjusting factor may be the difference between pre- and post- optimization retail prices. 
Forecasted profits may be compared to pre-optimization profits, and raw profit benefit may be adjusted), and performing: automatically determining, by the processor on said server, a subsequent adjusted outcome measure of the forecast for the promotion by applying the subsequent adjustment to the machine learning model (see col 46, ln 61-col 47, ln 6; raw benefit may be based on promo exclusion rules configured by the user via the user interface); automatically generating, by the processor on said server, a subsequent adjusted graphical representation of forecast having the subsequent adjusted outcome measure (see again col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration); and automatically displaying, by the client device, the subsequent adjusted graphical representation to the user (see again col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration).

Claim 4 (Original)

The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 1. WU further discloses wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises changing a value of at least one of the input parameters or the optimized parameters (see col 44, ln 43-col 45, ln 33 & Fig. 28; an adjusting factor may be the difference between pre- and post- optimization retail prices. Forecasted profits may be compared to pre-optimization profits, and raw profit benefit may be adjusted. See also col 46, ln 61-col 47, ln 6; raw benefit may be based on promo exclusion rules configured by the user via the user interface). 
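The ABE passage relied on for the model-adjustment limitation (¶[0099]-[0101]: batch reinforcement learning with value iteration reformulated as a supervised learning problem) can be sketched as tabular fitted Q-iteration, where each pass recalculates target values from the current estimate and then "fits" to them. The two-state toy problem and transition batch below are illustrative assumptions, not ABE's data:

```python
# Batch RL as iterated supervised learning: each iteration builds regression
# targets r + GAMMA * max_a' Q(s', a') from the current Q estimate, then fits
# the model to those targets (a tabular "fit" reproduces them exactly).

GAMMA = 0.9
ACTIONS = ("promote", "hold")

# Observed transitions: (state, action, reward, next_state). Toy data.
batch = [
    ("low", "promote", 5.0, "high"),
    ("low", "hold", 1.0, "low"),
    ("high", "promote", 2.0, "high"),
    ("high", "hold", 4.0, "low"),
]

def fitted_q_iteration(batch, n_iters=200):
    q = {(s, a): 0.0 for s, a, _, _ in batch}
    for _ in range(n_iters):
        # Supervised step: recalculate target values from the current Q ...
        targets = {
            (s, a): r + GAMMA * max(q.get((s2, b), 0.0) for b in ACTIONS)
            for s, a, r, s2 in batch
        }
        q = targets  # ... then "fit" the (tabular) model to those targets.
    return q

q = fitted_q_iteration(batch)
best_in_low = max(ACTIONS, key=lambda a: q[("low", a)])
```

With these rewards the learned policy promotes in the "low" state, since the one-step gain plus the discounted value of reaching "high" dominates holding.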
Claim 8 (Currently Amended)

WU does not explicitly disclose, but TARIQ discloses, a cloud-based system (see ¶[0030]; a cloud-based application) for generation of adjustable automated forecasts for a promotion (see ¶[0045] and [0050]; marketing data and sales data), the system comprising a client device communicatively coupled to a server via a network, said server comprising one or more processors and a data storage device (see ¶[0056]-[0057]; metrics are stored on a client device of a user. Computing devices may implement the method, as either a client or server or plurality of servers) WU discloses the one or more processors (see col 36, ln 64-col 37, ln 18 & Fig. 9; a computer system with processors) configured to execute: training or instantiating a machine learning model with a training set, the training set comprising received historical data and at least one input parameter (see again col 24, ln 59-63 and col 39, ln 11-27 and Fig. 19B; sales volume predicted based on price using a regression model using point of sale and historic data). WU does not explicitly disclose, but TARIQ discloses, the server further comprising: a network interface and an interface module (see ¶[0057]-[0059] & [0075]-[0076] and Fig. 4; computing devices may be used to implement the systems and methods as either a client or as a server or plurality of servers. Components can be interconnected on a network), WU further discloses to receive at least one input parameter for the promotion from the client device of the user (see col 7, ln 26-50 & col 11, ln 45-64; marketing conditions, price points, and causal variables representing an advertisement. See also col 12, ln 10-17; advertisement price. See also col 25, ln 61-col 26, ln 3; initial dataset information is input into the system); a forecasting module to automatically determine, using the machine learning model (see col 24, ln 59-63 and col 39, ln 11-27 and Fig. 
19B; sales volume predicted based on price using a regression model using point of sale and historic data), a set of forecasts each based on different parameters, and to determine at least one set of optimized parameters (see col 1, ln 14-21; price and promotion response analysis to provide fast and efficient forecasts with price optimization for business planning) that maximize an outcome measure of the forecast for the promotion (see col 1, ln 22-27; maximizing profit or demand or for a variety of other objectives), wherein the outcome measure is a predictive or explanatory score of the outcome of the promotion (see again col 1, ln 22-27; maximizing profit or demand or for a variety of other objectives. See also col 24, ln 49-63; predict promotional effects and actual sales volume. See also col 27, ln 53-61; predict demand); wherein the interface module on said server automatically generates a graphical representation of the forecast having the maximized outcome measure (see col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration); and said server automatically outputs via the network interface the at least one input parameter, at least one optimized parameter, and the graphical representation of the forecast to the client device of the user (see col 7, ln 3-25; the output of the econometric engine is the input of the optimization engine. MDA engine is coupled to the econometric engine and the financial model engine. See also col 8, ln 41-52 and col 40, ln 1-19 & Figs. 23, 65, and 71; imputed econometric variables may be output to other applications. The forecast is displayed by the user interface to the user. 
Avg price and cost parameters displayed in the promotion response analysis with associated forecasted lifts); the interface module comprising a parameter input interface object presented on the client device with the graphical representation (see col 36, ln 53-63 & col 37, ln 18-34; computer system includes a keyboard and mouse. CPU includes touch-sensitive displays) receiving an adjustment to at least one of the input parameters or at least one of the optimized parameters from the client device of the user via the network interface (see col 44, ln 43-col 45, ln 33 & Fig. 28; an adjusting factor may be the difference between pre- and post- optimization retail prices. Forecasted profits may be compared to pre-optimization profits, and raw profit benefit may be adjusted); WU does not specifically disclose, but ABE discloses, and adjusting the machine learning model with the adjustment to the at least one of the input parameters or the at least one of the optimized parameters (see ¶[0099]-[0101]; use batch reinforcement learning and reformulate value iteration as a supervised learning problem. Characterize input states and actions on subsequent iterations to recalculate target values). WU further discloses wherein the forecasting module automatically determines, in real time (see col 36, ln 64-col 37, ln 17; a computer system with processors and video displays), an adjusted outcome measure of the forecast for the promotion by applying the adjustment to the machine learning model (see col 46, ln 61-col 47, ln 6; raw benefit may be based on promo exclusion rules configured by the user via the user interface), and wherein the interface module automatically generates an adjusted graphical representation of the forecast having the adjusted outcome measure (see again col 40, ln 1-9 and Fig. 
64; generate a graphical representation of forecasts according to report configuration), automatically sends the adjusted graphical representation to the client device via said network interface (see col 7, ln 3-25; the output of the econometric engine is the input of the optimization engine. MDA engine is coupled to the econometric engine and the financial model engine. See also col 8, ln 41-52 and col 40, ln 1-19 & Figs. 23, 65, and 71; imputed econometric variables may be output to other applications. The forecast is displayed by the user interface to the user. Avg price and cost parameters displayed in the promotion response analysis with associated forecasted lifts); and wherein said client device displays the adjusted graphical representation to the user (see again col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration) in real-time (see col 64, ln 56-63; the promotion response analysis uses the coefficients and multipliers created during the most recent modeling run. See also col 36, ln 64-col 37, ln 17; a computer system with processors and video displays). WU does not explicitly disclose, but TARIQ discloses, wherein the machine learning model is a neural network machine learning model (see abstract and ¶[0002]-[0004]; a forecast using an artificial neural network. Utilize statistical machine learning techniques using an artificial neural network to predict a set of data metrics), wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises changing weighting given to at least one of the input parameters or the optimized parameters by the machine learning model (see ¶[0002]; artificial neural networks include sets of adaptive weights such as numerical parameters that are adjusted by a learning algorithm). 
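The TARIQ passage cited for the weighting limitation (¶[0002]: adaptive weights as numerical parameters adjusted by a learning algorithm) reduces, in the simplest case, to a linear unit whose per-feature weights can be changed and the forecast recomputed. The feature names, the base-demand constant, and all values below are hypothetical illustrations, not TARIQ's disclosure:

```python
# Sketch of "changing weighting given to an input parameter": a one-unit
# linear model where adjusting a single weight changes the forecast.

BASE_DEMAND = 2400.0  # illustrative bias term

def predict(weights, features):
    """Forecast as base demand plus a weighted sum of input parameters."""
    return BASE_DEMAND + sum(weights[k] * v for k, v in features.items())

features = {"price": 9.5, "discount": 0.15, "ad_spend": 2.0}
weights = {"price": -120.0, "discount": 800.0, "ad_spend": 45.0}

before = predict(weights, features)
weights["discount"] = 1000.0  # user adjusts the weighting of one parameter
after = predict(weights, features)
```

In a trained neural network the weights would be set by the learning algorithm; here the direct edit stands in for the claimed user-driven adjustment.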
WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9). TARIQ discloses computing data metrics using neural networks that make predictions using marketing data, implemented on a cloud system with servers and client devices. It would have been obvious to include the neural network and cloud system as taught by TARIQ in the system executing the method of WU with the motivation to analyze promotion responses via a computer system. WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9) to maximize profits (see col 1, ln 22-27). ABE discloses decision making for customer relationship management that includes the use of supervised reinforcement learning to maximize profits with respect to promotions (see ¶[0008]-[0009]). It would have been obvious to include supervised reinforcement learning as taught by ABE in the system executing the method of WU with the motivation to maximize profits for promotional campaigns.

Claim 9 (Previously Presented)

The combination of WU, TARIQ, and ABE discloses the system as set forth in claim 8. WU additionally discloses wherein the interface module further receives via the client device a subsequent adjustment to at least one of the input parameters or at least one of the optimized parameters from the user (see col 44, ln 43-col 45, ln 33 & Fig. 28; an adjusting factor may be the difference between pre- and post- optimization retail prices. 
Forecasted profits may be compared to pre-optimization profits, and raw profit benefit may be adjusted), and wherein the forecasting module further automatically determines a subsequent adjusted outcome measure of the forecast for the promotion by applying the subsequent adjustment to the machine learning model (see col 46, ln 61-col 47, ln 6; raw benefit may be based on promo exclusion rules configured by the user via the user interface), the interface module automatically generating a subsequent adjusted graphical representation of forecast having the subsequent adjusted outcome measure (see again col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration); and displaying the subsequent adjusted graphical representation to the user on the client device (see again col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration).

Claim 11 (Original)

The combination of WU, TARIQ, and ABE discloses the system as set forth in claim 8. WU further discloses wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises changing a value of at least one of the input parameters or the optimized parameters (see col 44, ln 43-col 45, ln 33 & Fig. 28; an adjusting factor may be the difference between pre- and post- optimization retail prices. Forecasted profits may be compared to pre-optimization profits, and raw profit benefit may be adjusted. See also col 46, ln 61-col 47, ln 6; raw benefit may be based on promo exclusion rules configured by the user via the user interface).

Claim 17 (Previously Presented)

The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 1. 
WU does not explicitly disclose, but TARIQ discloses, wherein applying the adjustment to the machine learning model comprises re-training the machine learning model (see ¶[0008] and [0033]; the artificial neural network can be trained using historical data). WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9). TARIQ discloses computing data metrics using neural networks that make predictions using marketing data, implemented on a cloud system with servers and client devices. It would have been obvious to include the neural network and cloud system as taught by TARIQ in the system executing the method of WU with the motivation to analyze promotion responses via a computer system.

Claim 18 (Previously Presented)

The combination of WU, TARIQ, and ABE discloses the system of claim 8. WU does not explicitly disclose, but TARIQ discloses, comprising re-training the machine learning model with the adjustment received from the client device (see ¶[0008] and [0033]; the artificial neural network can be trained using historical data). WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9). TARIQ discloses computing data metrics using neural networks that make predictions using marketing data, implemented on a cloud system with servers and client devices. It would have been obvious to include the neural network and cloud system as taught by TARIQ in the system executing the method of WU with the motivation to analyze promotion responses via a computer system.

Claim 19 (Previously Presented)

The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 1. 
WU further discloses wherein displaying the adjusted graphical representation to the user comprises displaying the adjusted graphical representation with an artifact of a graph of the graphical representation of the forecast having the maximized outcome measure (see col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration. Examiner Note: a graphical display of forecasts inherently includes the maximum value in that forecast). Claim 20 (Previously Presented) The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 2. WU further discloses wherein displaying the subsequent adjusted graphical representation to the user comprises displaying the subsequent adjusted graphical representation with one or more of (i) an artifact of a graph of the graphical representation of the forecast having the maximized outcome measure and (ii) an artifact of a graph displaying the adjusted graphical representation (see col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration. Examiner Note: a graphical display of forecasts inherently includes the maximum value in that forecast). Claim 21 (Previously Presented) The combination of WU, TARIQ, and ABE discloses the system of claim 8. WU further discloses wherein the displayed adjusted graphical representation of the forecast having the adjusted outcome measure comprises both the adjusted graphical representation and an artifact of the graphical representation of the forecast having the maximized outcome measure (see col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration. Examiner Note: a graphical display of forecasts inherently includes the maximum value in that forecast). Claim 22 (Previously Presented) The combination of WU, TARIQ, and ABE discloses the system as set forth in claim 9. 
WU further discloses wherein displaying the subsequent adjusted graphical representation of forecast having the subsequent adjusted outcome measure comprises displaying the subsequent adjusted graphical representation with one or more of (i) an artifact of a graph of the graphical representation of the forecast having the maximized outcome measure and (ii) an artifact of a graph displaying the adjusted graphical representation of the forecast having the adjusted outcome measure (see col 40, ln 1-9 and Fig. 64; generate a graphical representation of forecasts according to report configuration. Examiner Note: a graphical display of forecasts inherently includes the maximum value in that forecast). Claims 3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over US 8,010,404 B1 to WU et al. in view of US 20170220947 A1 to TARIQ and US 20040015386 A1 to ABE et al. as applied to claim 1 above, and further in view of US 6,928,398 B1 to Fang et al. (hereinafter ‘FANG’). Claim 3 (Original) The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 1. The combination of WU, TARIQ, and ABE does not explicitly disclose, but FANG discloses, wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises removal of at least one of the input parameters or the optimized parameters (see col 10, ln 21-33; model is modified by deleting the insignificant parameters). WU discloses a system and method for price and promotion response analysis that includes optimization through fine tuning of a model (see col 70, ln 11-36). FANG discloses a system and method for building a time series model that includes modifying the model through phases that include deleting insignificant parameters. It would have been obvious to delete insignificant parameters as taught by FANG in the system executing the method of WU with the motivation to fine tune an optimization process. 
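The parameter-deletion step cited from FANG (col 10, ln 21-33) amounts to pruning a fitted model's insignificant terms before refitting. A minimal sketch, assuming hypothetical parameter names, t-statistics, and a conventional |t| ≥ 2 cutoff (none of which come from FANG itself):

```python
# Illustrative sketch of "deleting the insignificant parameters" from a
# fitted forecast model. The parameter names, t-statistics, and threshold
# below are hypothetical, chosen only to show the pruning step.

def prune_insignificant(coefficients, t_stats, t_threshold=2.0):
    """Keep parameters whose |t-statistic| meets the threshold; report the rest."""
    kept = {name: coef for name, coef in coefficients.items()
            if abs(t_stats[name]) >= t_threshold}
    removed = sorted(set(coefficients) - set(kept))
    return kept, removed

# Hypothetical fitted promotion-response model: price and discount depth
# are significant; "weekday" barely moves the forecast.
coefs = {"price": -1.8, "discount_depth": 0.9, "weekday": 0.01}
tstats = {"price": -6.2, "discount_depth": 3.1, "weekday": 0.4}

kept, removed = prune_insignificant(coefs, tstats)
print(kept)     # → {'price': -1.8, 'discount_depth': 0.9}
print(removed)  # → ['weekday']
```

The model would then be refit on the surviving parameters; FANG describes this deletion as one phase of its model-building procedure.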
Claim 10 (Original) The combination of WU, TARIQ, and ABE discloses the system as set forth in claim 8. The combination of WU, TARIQ, and ABE does not explicitly disclose, but FANG discloses, wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises removal of at least one of the input parameters or the optimized parameters (see col 10, ln 21-33; model is modified by deleting the insignificant parameters). WU discloses a system and method for price and promotion response analysis that includes optimization through fine tuning of a model (see col 70, ln 11-36). FANG discloses a system and method for building a time series model that includes modifying the model through phases that include deleting insignificant parameters. It would have been obvious to delete insignificant parameters as taught by FANG in the system executing the method of WU with the motivation to fine tune an optimization process. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over US 8,010,404 B1 to WU et al. in view of US 20170220947 A1 to TARIQ and US 20040015386 A1 to ABE et al. as applied to claim 1 above, and further in view of US 8,694,339 B1 to Bunick et al. (hereinafter ‘BUNICK’). Claim 12 (Original) The combination of WU, TARIQ, and ABE discloses the system as set forth in claim 8. The combination of WU, TARIQ, and ABE does not specifically disclose, but BUNICK discloses, wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises changing weighting given to at least one of the input parameters or the optimized parameters by the machine learning model (see col 20, ln 36-47 and Fig. 18; if the user does not desire to alter the constraints, but rather desires to adjust the weights applied to the optimization model, the method loops back to step 1820 to adjust the weights). 
WU discloses a system and method for price and promotion response analysis that includes optimization through fine tuning of a model (see col 70, ln 11-36). BUNICK discloses modeling that includes adjusting weights applied to the model. It would have been obvious for one of ordinary skill in the art at the time of invention to include adjustment of weights as taught by BUNICK in the system executing the method of WU with the motivation to fine tune a model. Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over US 8,010,404 B1 to WU et al. in view of US 20170220947 A1 to TARIQ and US 20040015386 A1 to ABE et al. as applied to claim 1 above, and further in view of US 2015/0081392 A1 to Fox et al. (hereinafter ‘FOX’). Claim 6 (Original) The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 1. The combination of WU, TARIQ, and ABE does not specifically disclose, but FOX discloses, wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises filtering possible states of the input parameters or the optimized parameters (see ¶[0057]; correct for filters with a Kalman filter to update model parameters). WU discloses a system and method for price and promotion response analysis that includes optimization through fine tuning of a model (see col 70, ln 11-36). FOX discloses a competitor prediction tool that includes a Kalman filter to update model parameters in light of prediction errors. It would have been obvious to include the Kalman filter as taught by FOX in the system executing the method of WU with the motivation to fine tune an optimization model. Claim 13 (Original) The combination of WU, TARIQ, and ABE discloses the system as set forth in claim 8. 
The combination of WU, TARIQ, and ABE does not specifically disclose, but FOX discloses, wherein the adjustment to the at least one of the input parameters or the at least one of the optimized parameters comprises filtering possible states of the input parameters or the optimized parameters (see ¶[0057]; correct for filters with a Kalman filter to update model parameters). WU discloses a system and method for price and promotion response analysis that includes optimization through fine tuning of a model (see col 70, ln 11-36). FOX discloses a competitor prediction tool that includes a Kalman filter to update model parameters in light of prediction errors. It would have been obvious to include the Kalman filter as taught by FOX in the system executing the method of WU with the motivation to fine tune an optimization model. Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US 8,010,404 B1 to WU et al. in view of US 20170220947 A1 to TARIQ and US 20040015386 A1 to ABE et al. as applied to claim 1 above, and further in view of US 20160071117 A1 to Duncan (hereinafter ‘DUNCAN’). Claim 15 (Previously Presented) The combination of WU, TARIQ, and ABE discloses the method as set forth in claim 1. The combination of WU, TARIQ, and ABE does not specifically disclose, but DUNCAN discloses, wherein said training or instantiation relies on unsupervised learning techniques (see ¶[0129], [0132], and [0146]; use unsupervised modeling and learning to discover topics). WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9). DUNCAN discloses unsupervised learning techniques for modeling to allow for greater flexibility (see ¶[0129]). 
It would have been obvious to include unsupervised learning as taught by DUNCAN in the system executing the method of WU with the motivation to allow for greater flexibility in modeling and more model features. Claim 16 (Previously Presented) The combination of WU, TARIQ, and ABE discloses the system as set forth in claim 8. The combination of WU, TARIQ, and ABE does not specifically disclose, but DUNCAN discloses, wherein said training or instantiation relies on unsupervised learning techniques (see ¶[0129], [0132], and [0146]; use unsupervised modeling and learning to discover topics). WU discloses systems and methods for price and promotion response analysis that is implemented using a regression model and optimizing algorithm via a computer system that functions over a network (see col 3, ln 4-10 & Figs. 7-9). DUNCAN discloses unsupervised learning techniques for modeling to allow for greater flexibility (see ¶[0129]). It would have been obvious to include unsupervised learning as taught by DUNCAN in the system executing the method of WU with the motivation to allow for greater flexibility in modeling and more model features. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD N SCHEUNEMANN whose telephone number is (571)270-7947. The examiner can normally be reached M-F 9am-5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RICHARD N SCHEUNEMANN/Primary Examiner, Art Unit 3624
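The Kalman-filter adjustment cited from FOX in the claim 6 and 13 rejections above (updating model parameters in light of prediction errors) reduces, in its simplest scalar form, to a predict/correct step. The variable names, data, and noise constant here are illustrative assumptions, not FOX's implementation:

```python
# Minimal one-dimensional Kalman-style parameter update, in the spirit of
# the FOX citation: correct a model parameter using the prediction error.
# All constants below are illustrative assumptions.

def kalman_update(estimate, variance, observation, obs_noise=1.0):
    """One correct step for a scalar state (the model parameter)."""
    gain = variance / (variance + obs_noise)                   # Kalman gain
    new_estimate = estimate + gain * (observation - estimate)  # correct by prediction error
    new_variance = (1.0 - gain) * variance                     # uncertainty shrinks
    return new_estimate, new_variance

# Hypothetical noisy observations of a demand-lift parameter near 2.0.
est, var = 0.0, 10.0
for obs in [1.9, 2.2, 2.0, 1.8, 2.1]:
    est, var = kalman_update(est, var, obs)
print(round(est, 2))  # estimate converges toward ~2.0
```

Each step weighs the new observation against the current estimate in proportion to their uncertainties, which is what lets the filter "update model parameters in light of prediction errors" as the FOX citation describes.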

Prosecution Timeline

Sep 23, 2019
Application Filed
Mar 08, 2021
Non-Final Rejection — §101, §103
Sep 09, 2021
Response Filed
Oct 25, 2021
Final Rejection — §101, §103
Apr 29, 2022
Request for Continued Examination
May 08, 2022
Response after Non-Final Action
Jun 02, 2022
Non-Final Rejection — §101, §103
Sep 02, 2022
Response Filed
Sep 16, 2022
Final Rejection — §101, §103
Nov 25, 2022
Interview Requested
Dec 05, 2022
Response after Non-Final Action
Dec 13, 2022
Response after Non-Final Action
Jan 23, 2023
Request for Continued Examination
Jan 25, 2023
Response after Non-Final Action
Mar 08, 2023
Non-Final Rejection — §101, §103
Jun 09, 2023
Response Filed
Jul 18, 2023
Final Rejection — §101, §103
Oct 24, 2023
Notice of Allowance
Oct 24, 2023
Response after Non-Final Action
Nov 22, 2023
Response after Non-Final Action
Feb 29, 2024
Response after Non-Final Action
Mar 08, 2024
Response after Non-Final Action
May 01, 2024
Response after Non-Final Action
Jul 05, 2024
Response after Non-Final Action
Jul 05, 2024
Response after Non-Final Action
Jul 08, 2024
Response after Non-Final Action
Jul 08, 2024
Response after Non-Final Action
Aug 22, 2024
Request for Continued Examination
Aug 23, 2024
Response after Non-Final Action
Aug 23, 2024
Response after Non-Final Action
Sep 05, 2024
Non-Final Rejection — §101, §103
Sep 20, 2024
Response Filed
Sep 24, 2024
Final Rejection — §101, §103
Oct 07, 2024
Notice of Allowance
Oct 23, 2024
Response after Non-Final Action
Nov 06, 2024
Response after Non-Final Action
Dec 20, 2024
Response after Non-Final Action
Jan 27, 2025
Response after Non-Final Action
Jan 27, 2025
Response after Non-Final Action
Jan 28, 2025
Response after Non-Final Action
Jan 28, 2025
Response after Non-Final Action
Oct 20, 2025
Response after Non-Final Action
Dec 22, 2025
Request for Continued Examination
Jan 14, 2026
Non-Final Rejection — §101, §103
Jan 26, 2026
Response after Non-Final Action
Apr 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579549
PLATFORM FOR FACILITATING AN AUTOMATED IT AUDIT
2y 5m to grant · Granted Mar 17, 2026
Patent 12535999
A METHOD FOR EXECUTION OF A MACHINE LEARNING MODEL ON MEMORY RESTRICTED INDUSTRIAL DEVICE
2y 5m to grant · Granted Jan 27, 2026
Patent 12033094
AUTOMATIC GENERATION OF TASKS AND RETRAINING MACHINE LEARNING MODULES TO GENERATE TASKS BASED ON FEEDBACK FOR THE GENERATED TASKS
2y 5m to grant · Granted Jul 09, 2024
Patent 12026624
System and Method For Loss Function Metalearning For Faster, More Accurate Training, and Smaller Datasets
2y 5m to grant · Granted Jul 02, 2024
Patent 11836746
AUTO-ENCODER ENHANCED SELF-DIAGNOSTIC COMPONENTS FOR MODEL MONITORING
2y 5m to grant · Granted Dec 05, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
6%
Grant Probability
15%
With Interview (+8.4%)
4y 7m
Median Time to Grant
High
PTA Risk
Based on 551 resolved cases by this examiner. Grant probability derived from career allow rate.
