Prosecution Insights
Last updated: April 19, 2026
Application No. 18/201,958

DEMAND FORECASTING TOOL FOR FOOD ESTABLISHMENTS

Status: Non-Final OA (§101, §103)
Filed: May 25, 2023
Examiner: TORRES CHANZA, GABRIEL JOSE
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: SAP SE
OA Round: 3 (Non-Final)

Grant Probability: 0% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 0m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 4 resolved; -52.0% vs Tech Center average)
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Typical Timeline: 3y 0m average prosecution; 34 applications currently pending
Career History: 38 total applications across all art units

Statute-Specific Performance

§101: 38.4% (-1.6% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)

Deltas are relative to the Tech Center average estimate; based on career data from 4 resolved cases.

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/18/2025 has been entered.

Status of Claims

This communication is a Non-Final Office Action in response to Applicant’s Request for Continued Examination (“RCE”) for application number 18/201,958, received on 12/18/2025. Claims 1, 4, 5, 7-15, 17, 18, and 20-25 are currently pending and have been examined.

Response to Amendment

The amendment filed on 12/18/2025 has been entered.

Response to Arguments

Response to §101 arguments – Except as to the arguments addressed below, Applicant’s arguments regarding the §101 rejections previously applied (remarks at pgs. 10-14) are primarily raised in support of the amendments to the claims, which are believed to be fully addressed in the updated §101 rejections section below. Furthermore, the following arguments have been considered and are unpersuasive.

Applicant argues (remarks at pgs. 11-12): “On page 5 of the Advisory Action, the following is asserted with respect to parts of the claim as considered by the Office: "For example, given the scope recited in the claim limitations, which under BRI, can be reasonably interpreted as 2 sets of one or more data points, one of ordinary skill in the art would be able to use mathematical modeling to process 2 data sets with the help of pen and paper in order to generate a prediction. Similarly, one of ordinary skill in the art could reasonably generate vector representations for small data sets using various vectorization techniques (e.g., bag-of-words)." Applicant respectfully submits that the above reasoning cannot be maintained with respect to the amended independent claims. The human mind is simply not equipped to perform the recited operations. For example, given the nature and scope of the specification, as well as the ordinary and customary meanings of the claim elements, a skilled person would clearly understand that the passing of hidden state vectors between respective time steps in a time series cannot be performed by a human. Even with the aid of pen and paper, the skilled person would appreciate that a computer system is needed to perform the recursive, high-dimensional computations needed for processing a time series through hidden state vector passing to yield a vector representation. For a time series of even modest length, the recited claim operations would typically require a massive amount of computations. A human cannot practically do this to achieve the claimed result (generating the vector representation). Instead, these operations are inherent to computer processing. Accordingly, independent claims 1, 15, and 18 do not recite an abstract idea and are eligible at least under Step 2A, Prong One, of the USPTO's eligibility analysis. The same comments apply to the dependent claims at least because they depend from these independent claims. Applicant respectfully requests reconsideration and withdrawal of the rejections under 35 U.S.C. § 101 on this basis.”
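For context on the computations the Applicant describes: the "passing of hidden state vectors between respective time steps" is the standard RNN recurrence, sketched below under assumed dimensions. All sizes and names here are illustrative assumptions, not taken from the application or the record.

import numpy as np

# Minimal sketch of the RNN hidden-state recurrence at issue (illustrative
# dimensions only): each time step multiplies the previous hidden state by a
# weight matrix, so a series of length T with hidden size d costs on the
# order of T * d^2 multiply-accumulate operations.
d_in, d_hidden, T = 8, 64, 365                 # assumed feature size, state size, steps
rng = np.random.default_rng(0)
W_x = rng.normal(size=(d_hidden, d_in))        # input-to-hidden weights
W_h = rng.normal(size=(d_hidden, d_hidden))    # hidden-to-hidden weights
b = np.zeros(d_hidden)

x = rng.normal(size=(T, d_in))                 # a year of daily data points
h = np.zeros(d_hidden)                         # initial hidden state
for t in range(T):
    h = np.tanh(W_x @ x[t] + W_h @ h + b)      # state passed from step t-1 to step t

vector_representation = h                      # final state summarizes the series

Even at these modest assumed sizes, the hidden-to-hidden product alone is 64 × 64 = 4,096 multiplications per step, roughly 1.5 million over the 365 steps, which is the scale behind the "massive amount of computations" argument.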
In response, Examiner notes that using an RNN to process time series data by passing hidden state vectors between the respective time steps in the time series to determine the first vector representation does not integrate the abstract idea into a practical application, or otherwise add significantly more than the abstract idea, because it amounts to well-understood, routine, and conventional prior art activity (see, e.g., O’Donoghue et al. (US 20230119186 A1) at pars. [0070], [0322], and [0323]).

Applicant further argues (remarks at pg. 13): “On page 7, the Advisory Action asserts that the claims do not recite a "new machine learning architecture" or "anything beyond additional elements that are ubiquitous in machine learning and computing." Again, the Advisory Action makes a high-level statement regarding individual components being "ubiquitous," but does not address the specific combination of components claimed. In other words, save for indicating that individual elements of the claimed multi-input machine learning model exist, the Office has not explained why the combination of elements (as opposed to, for example, an RNN in isolation) fails to improve technology or a technical field.”

In response, Examiner respectfully disagrees and notes that, as noted in par. [0007] of the Advisory Action dated 12/05/2025, in pars. [0019-0025] of the Office Action dated 09/23/2025, as well as in the §101 rejections of the instant office action, the ordered combination of elements in the claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation and well-understood, routine, and conventional activity. Accordingly, the subject matter encompassed by the claims fails to amount to significantly more than the abstract idea itself, or otherwise improve technology or a technical field.

Applicant further argues (remarks at pg. 14): “Through the claimed architecture and operations, the invention improves technology or a technical field. For example, the claimed invention allows a machine learning model processing food establishment data to handle, separately and in parallel, observed periods and prediction periods, as discussed above. Accordingly, even if, arguendo, the amended independent claims are deemed to recite an abstract idea, the amended claims clearly integrate any alleged judicial exception that may be present in the claims into a practical application.”

In response, Examiner respectfully disagrees and notes that, as currently recited, the claims rely on generic computing components and instructions, as well as on well-understood, routine, and conventional elements and activity, to perform the abstract idea, which fails to improve technology or a technical field. See the updated §101 rejections below for more details. Accordingly, the §101 rejections are maintained.

Response to §103 arguments – Applicant’s arguments regarding the §103 rejections previously applied (remarks at pgs. 14-17) are primarily raised in support of the amendments to the claims, which are believed to be fully addressed in the updated §103 rejections section below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4, 5, 7-15, 17, 18, and 20-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter. The claims are directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of those findings is provided below, as further set forth in MPEP 2106.

Step 1: The claimed invention is analyzed to determine if it falls outside one of the four statutory categories of invention. See MPEP 2106.03. Claims 1, 4-5, and 7-14 are directed to a system (i.e., Machine), claims 15, 17, and 21-22 are directed to a method (i.e., Process), and claims 18, 20, and 23-25 are directed to a computer-readable medium (i.e., Manufacture). Therefore, claims 1, 4-5, 7-15, 17-18, and 20-25 are directed to patent-eligible categories of invention. Accordingly, the claims satisfy Step 1 of the eligibility inquiry.

Step 2A, Prong 1: In prong one of Step 2A, the claims are analyzed to evaluate whether they recite a judicial exception. See MPEP 2106.04. Independent claim 1 recites a system for predicting demand of food items. As drafted, the limitations recited by claim 1 fall under the “Mental Processes” abstract idea grouping by setting forth activities that could be performed mentally by a human (including an observation, evaluation, judgment, or opinion).
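For orientation before the limitation-by-limitation walkthrough below: a model with the general shape recited by claim 1 can be sketched as follows using the Keras functional API. This is an illustrative reading only; every layer size, dimension, and name is an assumption, not the Applicant's implementation or anything from the record.

import keras
from keras import layers

# Sketch of a multi-input model shaped like the claim 1 recitation (assumed
# sizes): an LSTM branch encodes the observed-period time series, a dense
# branch encodes the prediction-period features, the two vector
# representations are concatenated, and a final head emits the predicted value.
T = 28             # assumed length of the observed period (time steps)
n_obs_feats = 12   # assumed features per observed data point (target + inputs)
n_pred_feats = 11  # assumed features for the prediction-period data point

observed = keras.Input(shape=(T, n_obs_feats), name="observed_period")
v1 = layers.LSTM(64, name="first_component_rnn")(observed)          # hidden state passed step to step

pred_point = keras.Input(shape=(n_pred_feats,), name="prediction_period")
v2 = layers.Dense(32, activation="relu", name="second_component_dense")(pred_point)

merged = layers.Concatenate(name="third_component_concat")([v1, v2])
y_hat = layers.Dense(1, name="fourth_component_head")(merged)       # first predicted value

model = keras.Model(inputs=[observed, pred_point], outputs=y_hat)
model.summary()

On this reading, the later autoregressive limitation would correspond to feeding y_hat back in as part of the inputs when generating the predicted value for the second prediction period.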
Claim 1 recites a system comprising a memory, and one or more processors with limitations for:

enriching first user data of a food establishment to obtain a first input data set, the first user data comprising a sequence of first data points defining a time series representing at least changes in demand over time for an observed period, each first data point comprising, for a respective time step in the time series, an observed value of a target variable and a value of each of a plurality of establishment input features, the target variable being related to a food item, and the first user data being enriched, for each first data point, using values of a plurality of complementary features corresponding to the first data point; (The step for “enriching first user data” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

enriching second user data of the food establishment to obtain a second input data set, the second user data comprising a second data point for a first prediction period, the second data point comprising a value of each of the establishment input features, and the second user data being enriched using values of the plurality of complementary features corresponding to the second data point; (The step for “enriching second user data” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

generating, by a multi-input machine learning model, a first predicted value of the target variable for the first prediction period, the generating of the first predicted value comprising: (The step for “generating a first predicted value” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

processing, by a first component of the multi-input machine learning model comprising a Recurrent Neural Network (RNN), the first input data set to generate a first vector representation for the observed period, processing of the first input data set comprising processing, by the RNN, the time series by passing hidden state vectors between the respective time steps in the time series to determine the first vector representation, (The step for “processing the first input data” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

processing, by a second component of the multi-input machine learning model comprising one or more dense layers, the second input data set to generate a second vector representation for the first prediction period, the first component and the second component separately processing the first input data set and the second input data set, respectively; (But for the additional elements recited in the claim limitation – underlined – to be analyzed under Steps 2A, Prong 2, and 2B, the step for “processing the second input data” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

concatenating, by a third component of the multi-input machine learning model, the first vector representation and the second vector representation to generate at least one concatenated vector, and (But for the additional elements recited in the claim limitation – underlined – to be analyzed under Steps 2A, Prong 2, and 2B, the step for “concatenating the first vector representation and the second vector representation to generate at least one concatenated vector” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

processing, by a fourth component of the multi-input machine learning model, the at least one concatenated vector to generate the first predicted value for the first prediction period; (But for the additional elements recited in the claim limitation – underlined – to be analyzed under Steps 2A, Prong 2, and 2B, the step for “processing the at least one concatenated vector” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

generating, by the multi-input machine learning model, a plurality of further predicted values of the target variable, the generating of the plurality of further predicted values including using the first predicted value as input to generate a second predicted value of the target variable for a second prediction period that follows the first prediction period; (The step for “generating a plurality of further predicted values of the target variable” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper.)

and updating, in at least one database, order data associated with an order system of the food establishment, based on output data including the first predicted value and the plurality of further predicted values of the target variable. (The step for “updating order data” can be accomplished mentally such as via human observation, evaluation, judgment, or with the help of pen and paper. Additionally, when considered as an additional element, the “updating” step amounts to insignificant extra-solution activity as insignificant application.)

Independent claims 15 and 18 recite a method and a computer-readable medium with limitations that are substantially similar to those recited by independent claim 1. Therefore, the same analysis applies.

The additional elements beyond the abstract idea for consideration under Step 2A, Prong 2, and Step 2B recited by independent claims 1, 15, and 18 are: memory, one or more processors, multi-input machine learning model, Recurrent Neural Network (RNN), one or more dense layers, and database. Similarly, the step for updating will be treated as an additional element in the analysis. Dependent claims 4-5, 7-14, 17, and 20-25 further narrow the abstract idea and introduce the following additional elements for consideration under said steps: feedforward network, one or more long short-term memory (LSTM) layers, training data sets, auto-enrichment function, and online data aggregator component.

Step 2A, Prong 2: An evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the judicial exception into a practical application of the exception. See MPEP 2106.04(d). Regarding the computing additional elements, namely memory, one or more processors, and database, these additional elements have been evaluated but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (generic computing environment). See MPEP 2106.05(f) and 2106.05(h).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.

With respect to the limitations for processing, by a first component of the multi-input machine learning model comprising a Recurrent Neural Network (RNN); processing, by the RNN, the time series by passing hidden state vectors between the respective time steps in the time series; processing, by a second component of the multi-input machine learning model comprising one or more dense layers; concatenating, by a third component of the multi-input machine learning model; processing, by a fourth component of the multi-input machine learning model; generating, by the multi-input machine learning model; wherein the second component comprises a feedforward network; wherein the RNN of the first component comprises one or more long short-term memory (LSTM) layers; wherein the multi-input machine learning model is trained on a linked sequence of training data sets, each training data set in the linked sequence of training data sets comprising training data covering a respective training data period; and wherein enriching the first user data comprises invoking an auto-enrichment function of an online data aggregator component, these limitations fail to integrate the abstract idea into a practical application because they provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f).

MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.

Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.

Step 2B: The claims are analyzed to determine whether any additional element, or combination of additional elements, is sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an "inventive concept." See MPEP 2106.05.
Regarding the computing additional elements, namely memory, one or more processors, and database, these additional elements have been evaluated, but fail to add significantly more to the claims because they amount to using generic computing elements (computer hardware) or instructions/software (engine) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (network computing environment, the internet, online) and does not amount to significantly more than the abstract idea itself. Applicant’s specification recites the computing additional elements at a high level of generality.

With respect to the limitations for processing, by a first component of the multi-input machine learning model comprising a Recurrent Neural Network (RNN); processing, by the RNN, the time series by passing hidden state vectors between the respective time steps in the time series; processing, by a second component of the multi-input machine learning model comprising one or more dense layers; concatenating, by a third component of the multi-input machine learning model; processing, by a fourth component of the multi-input machine learning model; generating, by the multi-input machine learning model; wherein the second component comprises a feedforward network; wherein the RNN of the first component comprises one or more long short-term memory (LSTM) layers; wherein the multi-input machine learning model is trained on a linked sequence of training data sets, each training data set in the linked sequence of training data sets comprising training data covering a respective training data period; and wherein enriching the first user data comprises invoking an auto-enrichment function of an online data aggregator component, these limitations fail to add significantly more to the abstract idea because they provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f).

MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.

Therefore, the additional elements merely describe generic computing elements or computer-executable instructions (software) that merely serve to tie the abstract idea to a particular operating environment, which does not add significantly more to the abstract idea. See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).

Moreover, with respect to using an RNN to process time series data by passing hidden state vectors between the respective time steps in the time series, it is noted that employing an RNN to process data by passing hidden state vectors between the respective time steps in the time series amounts to well-understood, routine, and conventional prior art activity, which fails to add significantly more to the claims. See, e.g., O’Donoghue et al. (US 20230119186 A1): [0070] For example, example embodiments of the present disclosure encode all available datasets (both structured and unstructured) into time series for predictive modeling. [0322] the recurrent neural network may include two or more conventional recurrent neural network machine learning models, two or more long short term memory neural networks, two or more gated recurrent units, and/or the like. [0323] In some embodiments, the two or more recurrent neural network machine learning models of the machine learning framework may include a state processing recurrent neural network machine learning model and an attribute processing machine learning model. In some embodiments, a state processing recurrent neural network machine learning model may be a recurrent neural network machine learning model that is configured to process an event encoding data object to generate a state-level attention weight value for the event data object. In some embodiments, the state processing event recurrent neural network machine learning model is configured to process an event encoding data object to generate a hidden state vector for the encoded event data object, then process the hidden state vector.

Furthermore, even if the updating step is interpreted as an additional element, this activity at most amounts to insignificant extra-solution activity, which does not add significantly more to the abstract idea, as noted in MPEP 2106.05(g). In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself. The ordered combination of elements in the claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea itself.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-5, 7, 11-15, 18, and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Mimassi (US 20220198586 A1, hereinafter “Mimassi”), in view of Lee et al. (US 20220358366 A1, hereinafter “Lee”), in further view of Kivatinos et al. (US 20200034707 A1, hereinafter “Kivatinos”), in further view of O’Donoghue et al. (US 20230119186 A1, hereinafter “O’Donoghue”).

Regarding claims 1/15/18: Mimassi teaches a system ([Abstract] A system and method for image-based personalized food item search, design, and culinary fulfillment), comprising a memory that stores instructions; ([0011] engine comprising a first plurality of programming instructions stored in the memory which… a prediction engine comprising a second plurality of programming instructions stored in the memory; [0114] Memory 25 may be random-access memory having any structure and architecture known in the art, for use by processors 21, for example to run software.), and one or more processors configured by the instructions to perform operations ([0114] Computing device 20 includes processors 21 that may run software that carry out one or more functions or applications of aspects, such as for example a client application 24. Processors 21 may carry out computing instructions under control of an operating system 22 such as, for example, a version of MICROSOFT WINDOWS™ operating system, APPLE macOS™ or iOS™ operating systems, some variety of the Linux operating system, ANDROID™ operating system, or the like.), a method ([Abstract] A system and method for image-based personalized food item search, design, and culinary fulfillment), and computer-readable medium ([0113] at least some network device aspects may include non-transitory machine-readable storage media) with limitations comprising:

enriching first user data of a food establishment to obtain a first input data set, ([0055] A culinary fulfilment engine 300 then determines the patron's location by querying the patron's mobile device for location information (e.g., provided by the mobile device's GPS hardware, Wi-Fi location applications, etc.) and gathers information from external resources 180.; Fig. 1: External Resources (nutrition data, health data, rating data, maps, etc.) 180. For example, system may access a publicly-available mapping website such as Google maps.; [0055] The patron may further enter additional food item preferences and a destination or select a pre-entered destination presented from the patron's preferences through patron real-time update engine 211, which will allow the system to better customize its restaurant suggestions. One of ordinary skill in the art would reasonably consider the patron as a first user of the system.);

the first user data comprising a sequence of first data points defining a time series representing at least changes in demand over time for an observed period, each first data point comprising, for a respective time step in the time series, an observed value of a target variable and a value of each of a plurality of establishment input features, the target variable being related to a food item, and the first user data being enriched, for each first data point, using values of a plurality of complementary features corresponding to the first data point; ([0012] patron profile database comprising a plurality of patron profiles, each patron profile comprising: a patron preference; and a patron review for one or more food item recommendations, each food item recommendation comprising a second list of required ingredients and a second required culinary skill; [0024] FIG. 5 is a flow diagram showing the steps of an exemplary method for an optimized food item recipe generation process based on a particular patron current food preferences, historical culinary transactions, current geographic location, and the restaurant's ingredients on hand and culinary skills.; [0055] The patron may further enter additional food item preferences and a destination or select a pre-entered destination presented from the patron's preferences through patron real-time update engine 211; A culinary fulfilment engine 300 then determines the patron's location by querying the patron's mobile device for location information and gathers information from external resources 180 about restaurant options located nearby and along the route from the patron's current location to the patron's destination, as well as traffic information related to the patron's location, intended route, and identified restaurant options.; Fig. 2: Patron Culinary Transactions 213; [0091] An analysis (as further exemplified in FIG. 5) is performed on patrons' historical and real-time food item requirements and compared to menu options and culinary capabilities of restaurants in proximity of patron 404 from which a consumer specific food item is generated 405. Examiner notes that one of ordinary skill in the art would reasonably interpret historical culinary transactions, as disclosed by Mimassi, as equivalent to the time series comprising observed values at different time steps, as disclosed in Applicant’s claim.);

enriching second user data of the food establishment to obtain a second input data set, (Fig. 1: External Resources (nutrition data, health data, rating data, maps, etc.) 180; [0054] restaurants may connect to restaurant portal 140 to enter information about the restaurant and its menu. The system may be able to determine certain restaurant information by accessing external resources 180 such as mapping websites and applications. For example, system may access a publicly-available mapping website such as Google maps, which may contain information about the restaurant's name, location, types of food offered, hours of operation, phone number, etc. Thus, in some aspects, it is not necessary for the restaurant to enter certain information through portal, as the information may be automatically obtained from external resources 180.
One of ordinary skill in the art would reasonably consider the restaurant as a second user of the system.);

the second user data comprising a second data point for a first prediction period, the second data point comprising a value of each of the establishment input features, and the second user data being enriched using values of the plurality of complementary features corresponding to the second data point; ([0054] Likewise, restaurants may connect to restaurant portal 140 to enter information about the restaurant and its menu. Examples of the types of information that a restaurant may enter include, but are not limited to: restaurant name, location, types of food offered, hours of operation, phone number, specific menu offerings, food preparation times for certain dishes (including adjustments to food preparation times during busy periods for the restaurant), prices, calorie counts, ingredients, side dishes, drinks, and special pricing options like daily “happy hour” specials or seasonal offerings. In some aspects, the system may be able to determine certain restaurant information by accessing external resources 180 such as mapping websites and applications.);

generating, by a multi-input machine learning model, a first predicted value of the target variable ([0061] a recipe generator engine 214 receives the patron's current food item requirements from a patron real time update engine 211 along with a patron profile 213. A recipe generation engine 214 obtains restaurant ingredient data 215 and restaurant recipe data 216 for one or more restaurants either from a database 150 or from external resources 180. A recipe generation engine 214 then uses machine learning algorithms to create a personalized food item optimized to meet the patron preferences and outcomes. See also Fig. 2, which depicts multiple inputs being fed into recipe generator engine 214.);

for the first prediction period, ([0057] In some aspects, culinary fulfilment engine 300, through restaurant portal 140, may also provide information to the restaurant to schedule the restaurant's food preparation activities to coordinate with the patron's arrival. One of ordinary skill in the art would reasonably interpret the patron’s arrival time as the prediction period, as supported by Applicant’s own specification, which in [0021] discloses “The prediction period may be a future period for which the target variable is to be predicted, e.g., a next day or a next week.”);

the generating of the first predicted value comprising: processing, by a first component of the multi-input machine learning model comprising a Recurrent Neural Network (RNN), the first input data set to generate a first vector representation for the observed period, ([0011] a first machine learning algorithm configured to identify associations among the patron preferences, the first lists of required ingredients, and the first required culinary skills; [0093] Convert aggregate historical patron food item text documents to corresponding word vectors to represent generalized patron food profile 601; [0096] An exemplary recipe optimization method may include deep learning techniques familiar to those skilled in the art. One such form of deep learning that is particularly useful when generating text is Recurrent Neural Networks (“RNN”).);

processing of the first input data set comprising processing, by the RNN, the time series ([0024] FIG. 5 is a flow diagram showing the steps of an exemplary method for an optimized food item recipe generation process based on a particular patron current food preferences, historical culinary transactions, current geographic location, and the restaurant's ingredients on hand and culinary skills.; [0096] The initial input data will cause the model to learn the weights of connections that influence the activity of these gates which will impact the resultant output. To generate unique personalized recipes for a given patron, standard recipes along with the patron profile data are fed into the input gate of the RNN, in turn the RNN will learn what's important to the patron and create unique recipe outputs.);

processing, by a second component of the multi-input machine learning model… …the second input data set to generate a second vector representation for the first prediction period, the first component and the second component separately processing the first input data set and the second input data set, respectively; ([0093] Convert restaurant recipe and culinary preparation text documents to corresponding word vectors 602. [0011] second machine learning algorithm configured to recognize and output a target food item);

…by a third component of the multi-input machine learning model, … ([0018] the third machine learning algorithm is used to construct a prediction model);

and processing, by a fourth component of the multi-input machine learning model, the at least one concatenated vector to generate the first predicted value for the first prediction period; ([0067] In operation, recommendation engine 314 will take as inputs a personalized recipe information 241, patron location data 312, traffic data 313, restaurant location data 315, restaurant skill data 316, restaurant review data 317. Using semantic vector space methods familiar to those skilled in the art, the input data is represented as word vector and compared using cosine similarity techniques with the optimized target vector to provide as outputs a culinary preparation information 318 that is used by the restaurant and a patron personalized food item 242 that is displayed to the patron.);

generating, by the multi-input machine learning model, a plurality of further predicted values of the target variable, the generating of the plurality of further predicted values including using the first predicted value as input to generate a second predicted value of the target variable for a second prediction period that follows the first prediction period; ([0057] culinary fulfilment engine 300, through restaurant portal 140, may also provide information to the restaurant to schedule the restaurant's food preparation activities to coordinate with the patron's arrival. If the restaurant has entered information such as food preparation times, culinary fulfilment engine 300 may use that information to instruct the restaurant's kitchen staff when to start preparation of the patron's order, such that the order will be ready just prior to arrival of the patron.);

and updating, in at least one database, order data associated with an order system of the food establishment, based on output data including the first predicted value and the plurality of further predicted values of the target variable.
([0056] In an aspect, culinary fulfilment server 300 will contact the restaurant through restaurant portal 140 to automatically enter an order into the restaurant's computer 141.; [0057] Such food preparation times and scheduling may be adjusted for busy periods at the restaurant (typically around lunch and dinner) either automatically based on the restaurant's history as stored in a database 150, or by retrieving information stored in a database 150 that has been manually entered by the restaurant through restaurant portal 140.)

However, Mimassi does not explicitly teach: by passing hidden state vectors between the respective time steps in the time series to determine the first vector representation; …comprising one or more dense layers; concatenating, … …the first vector representation and the second vector representation to generate at least one concatenated vector.

Lee teaches: …comprising one or more dense layers, … (Fig. 1: Dense Layer 104). It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi with Lee’s features listed above. One would’ve been motivated to do so in order to include network elements including one or more inputs, one or more nodes, and an output (Lee; [0025]). By incorporating the teachings of Lee, one would’ve been able to use dense layers in the neural network.

Mimassi and Lee do not teach: by passing hidden state vectors between the respective time steps in the time series to determine the first vector representation; concatenating, … …the first vector representation and the second vector representation to generate at least one concatenated vector.

Kivatinos teaches: concatenating, … …the first vector representation and the second vector representation to generate at least one concatenated vector, ([0035] The encoded fee schedule and encoded set of recent billing claims may be combined, such as by concatenating the two vectors into a single vector.). It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi and Lee with Kivatinos’ features listed above. One would’ve been motivated to do so in order to use information from both the physician's fee schedule and recent billing claims to make its prediction (Kivatinos; [0035]). By incorporating the teachings of Kivatinos, one would’ve been able to concatenate the two vectors in order to produce a recommendation.

Mimassi, Lee, and Kivatinos do not teach: by passing hidden state vectors between the respective time steps in the time series to determine the first vector representation. O’Donoghue teaches: by passing hidden state vectors between the respective time steps in the time series to determine the first vector representation, ([0323] In some embodiments, the state processing event recurrent neural network machine learning model is configured to process an event encoding data object to generate a hidden state vector for the encoded event data object, then process the hidden state vector in accordance with the parameters (e.g., weights and/or biases) of the state processing recurrent neural network machine learning model to generate a state processing model output for the encoded event data object.). It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, and Kivatinos with O’Donoghue’s features listed above. One would’ve been motivated to do so in order to generate the state-level attention weight value for the encoded event data object based at least in part on the state processing model output for the encoded event data object (O’Donoghue; [0323]). By incorporating the teachings of O’Donoghue, one would’ve been able to determine the first vector representation by passing hidden state vectors between the time steps.

Regarding claim 4: Mimassi does not teach: The system of claim 1, wherein the second component comprises a feedforward network. Lee further teaches: The system of claim 1, wherein the second component comprises a feedforward network. (Fig. 1: Neural Network 100). It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, & Kivatinos with Lee’s additional features listed above. One would’ve been motivated to do so in order to include organizing a structure of the neural network 100 and “training” the neural network 100 (Lee; [0024]). By incorporating the teachings of Lee, one would’ve been able to use a feedforward neural network in the demand forecasting tool.

Regarding claim 5: Mimassi further teaches: wherein the RNN of the first component comprises one or more long short-term memory (LSTM) layers ([0096] One such form of deep learning that is particularly useful when generating text is Recurrent Neural Networks (“RNN”) using long short-term memory (“LSTMs”) units or cells. A single LSTM is comprised of a memory-containing cell, an input gate, an output gate and a forget gate. The input and forget gate determine how much of incoming values transit to the output gate and the activation function of the gates is usually a logistic function. The initial input data will cause the model to learn the weights of connections that influence the activity of these gates which will impact the resultant output. To generate unique personalized recipes for a given patron, standard recipes along with the patron profile data are fed into the input gate of the RNN, in turn the RNN will learn what's important to the patron and create unique recipe outputs.)

Regarding claim 7: Mimassi further teaches: wherein the multi-input machine learning model is trained on a linked sequence of training data sets, each training data set in the linked sequence of training data sets comprising training data covering a respective training data period. ([0083] According to some embodiments, machine learning engine 1505 may be configured to use any desirable machine learning techniques to learn or train food item models 1507 using the labeled examples. Examples of machine learning techniques that can be used include, but are not limited to, supervised learning based techniques (e.g., artificial neural networks, Bayesian-based techniques, decision trees, etc.), unsupervised learning based techniques (e.g., data clustering, expectation-maximization algorithms, etc.), reinforcement learning based techniques, deep learning based techniques, and the like. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve predictions, done with an ordered sequence of data.)

Regarding claims 11/23: Mimassi further teaches: wherein enriching the first user data comprises invoking an auto-enrichment function of an online data aggregator component.
([0054] it is not necessary for the restaurant to enter certain information through portal, as the information may be automatically obtained from external resources 180. [0099] Recommendation engine 314 also receives information from a number of sources to assist with producing a specific recipe recommendation, including (but not limited to) patron location data 312, traffic data 313, restaurant location data 315, restaurant skill data 316 (such as the skills of individual chefs that are working at the time), and restaurant review data 317. This aggregated information may then be used to produce a patron-specific personalized food item 242.)

Regarding claims 12/24: Mimassi further teaches: wherein the target variable is a number of food items sold by the food establishment. (Fig. 3: Recommendation Engine 314, Patron Personalized Food Item 242. [0067] In operation, recommendation engine 314 will take as inputs a personalized recipe information 241, patron location data 312, traffic data 313, restaurant location data 315, restaurant skill data 316, restaurant review data 317. Using semantic vector space methods familiar to those skilled in the art, the input data is represented as word vector and compared using cosine similarity techniques with the optimized target vector to provide as outputs a culinary preparation information 318 that is used by the restaurant and a patron personalized food item 242 that is displayed to the patron.)

Regarding claim 13: Mimassi further teaches: wherein the establishment input features comprise one or more of: date; holiday data; weather data; temperature data; humidity data; establishment type; cuisine type; delivery type; parking availability; establishment geographic area; establishment geographic area income level; establishment rating; peak time; food item price; food item category; or promotion data. ([0099] Recommendation engine 314 also receives information from a number of sources to assist with producing a specific recipe recommendation, including (but not limited to) patron location data 312, traffic data 313, restaurant location data 315, restaurant skill data 316 (such as the skills of individual chefs that are working at the time), and restaurant review data 317. This aggregated information may then be used to produce a patron-specific personalized food item 242, along with a set of culinary instructions for preparing the patron-specific item that may be sent as culinary preparation information 318.)

Regarding claim 14: Mimassi further teaches: wherein the complementary features comprise one or more of: competitor data; online trend data; web search data; location data; or social media data ([0014] According to an aspect of an embodiment, the patron preference is based on social media information retrieved from a social media network.; [0040] Food blogs and social media accounts dedicated to the food industry have proliferated in the past decade. As is especially the case with social media accounts, oftentimes a picture of a food item (e.g., a meal displayed on an Instagram account) may be posted online with little context such as, for example, the name of the dish, the location where the dish was eaten/purchased, and the ingredients required to make the meal. Therefore, if an individual wishes to eat the meal they see in a picture they must expend energy to search for that missing context, which can be time and resources intensive.)

Claims 8-9, 17, 20-21, and 25 are rejected under 35 U.S.C.
103 as being unpatentable over Mimassi (US 20220198586 A1, hereinafter “Mimassi”), in view of Lee et al. (US 20220358366 A1, hereinafter “Lee”), in further view of Kivatinos et al. (US 20200034707 A1, hereinafter “Kivatinos”), in further view of O’Donoghue et al. (US 20230119186 A1, hereinafter “O’Donoghue”) as applied to claims 7 and 15 above, in further view of Quigley et al. (US 20230131603 A1, hereinafter “Quigley”).

Regarding claim 8: Mimassi does not teach: enriching third user data of the food establishment to obtain an additional training data set, the third user data comprising a sequence of third data points for an additional training data period, each third data point comprising an observed value of the target variable and a value of each of the establishment input features, and the third user data being enriched, for each third data point, using values of the complementary features corresponding to the third data point; linking the additional training data set to the linked sequence of training data sets to obtain an updated linked sequence of training data sets; and storing the updated linked sequence of training data sets.

Lee further teaches: storing the updated linked sequence of training data sets. ([0063] external system 620 may include any number of servers, hosts, systems, and/or databases that store data to be accessed by the system 610. One of ordinary skill in the art would reasonably interpret the sequence of training data sets as data that can be stored to be accessed by the system 610.). It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, & Kivatinos with Lee’s additional features listed above. One would’ve been motivated to do so in order to store data (Lee; [0063]). By incorporating the teachings of Lee, one would’ve been able to store the linked sequence of training data sets.

Mimassi, Lee, & Kivatinos do not teach: enriching third user data of the food establishment to obtain an additional training data set, the third user data comprising a sequence of third data points for an additional training data period, each third data point comprising an observed value of the target variable and a value of each of the establishment input features, and the third user data being enriched, for each third data point, using values of the complementary features corresponding to the third data point; linking the additional training data set to the linked sequence of training data sets to obtain an updated linked sequence of training data sets.

Quigley teaches: enriching third user data of the food establishment to obtain an additional training data set, the third user data comprising a sequence of third data points for an additional training data period, each third data point comprising an observed value of the target variable and a value of each of the establishment input features, and the third user data being enriched, for each third data point, using values of the complementary features corresponding to the third data point; ([0533] Training can be done using training data, which may be collected or generated for training purposes. [0534] In embodiments, the machine learning system 502 trains a model based on training data. In embodiments, the machine learning system 502 may receive vectors containing user data (e.g., transaction history, preferences, wish list virtual assets, and the like), virtual asset data (e.g., price, color, fabric, and the like), and outcomes (e.g., redemption, exchanges, and the like).; [0550] the analytics system 602 may include one or more analytic agents that are configured to execute a set of processes on collected data to produce an analytic result data structure. Once structured, an analytic agent may query the structured data set with a set of queries (e.g., SQL queries) and may further process the results of the queries (e.g., combine, aggregate, perform statistical analyses, and/or the like) to obtain an analytic result data structure.)

linking the additional training data set to the linked sequence of training data sets to obtain an updated linked sequence of training data sets; ([1177] Training a machine-learning model may include supervised learning (for example, based on labelled input data), unsupervised learning, and reinforcement learning. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve system predictions, performed with an ordered sequence of data updated over time.)

It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, & Kivatinos with Quigley’s features listed above. One would’ve been motivated to do so in order to improve prediction accuracy, reduce storage space, and increase processing speed (Quigley; [1176]). By incorporating the teachings of Quigley, one would’ve been able to use historical data to train the system.

Regarding claim 9: Mimassi further teaches: retraining the multi-input machine learning model… ([0083] Examples of machine learning techniques that can be used include, but are not limited to, supervised learning-based techniques (e.g., artificial neural networks, Bayesian-based techniques, decision trees, etc.), unsupervised learning-based techniques (e.g., data clustering, expectation-maximization algorithms, etc.), reinforcement learning based techniques, deep learning-based techniques, and the like. One of ordinary skill in the art would reasonably interpret reinforcement learning as an approach that includes retraining a model to achieve better performance.)

Mimassi does not teach: …on the updated linked sequence of training data sets. Quigley further teaches: …on the updated linked sequence of training data sets. ([1177] Training a machine-learning model may include supervised learning (for example, based on labelled input data), unsupervised learning, and reinforcement learning. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve system predictions, performed with an ordered sequence of data updated over time.). It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, Kivatinos, & Quigley with Quigley’s additional features listed above. One would’ve been motivated to do so in order to improve prediction accuracy, reduce storage space, and increase processing speed (Quigley; [1176]). By incorporating the teachings of Quigley, one would’ve been able to use updated data to train the system.
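As an illustration of the "linked sequence of training data sets" maintenance pattern recited in claims 8-9 (and again in claims 17/20 below), a small sketch follows. All class, field, and function names are hypothetical, not drawn from the claims or any cited reference.

from dataclasses import dataclass, field

# Hypothetical sketch: each training data set covers one training data period;
# a newly enriched set is linked to the sequence, the updated sequence is
# stored, and the model is retrained on the flattened rows.
@dataclass
class TrainingDataSet:
    period: str   # the training data period this set covers, e.g. "2025-12"
    rows: list    # enriched third data points (observed target + features)

@dataclass
class LinkedTrainingSequence:
    data_sets: list = field(default_factory=list)

    def link(self, new_set: TrainingDataSet) -> None:
        # linking step: append the additional set, preserving period order
        self.data_sets.append(new_set)

    def all_rows(self) -> list:
        # flatten the linked sequence for retraining
        return [row for ds in self.data_sets for row in ds.rows]

sequence = LinkedTrainingSequence()
sequence.link(TrainingDataSet("2025-11", [{"demand": 118, "temp_c": 12.0}]))
sequence.link(TrainingDataSet("2025-12", [{"demand": 131, "temp_c": 9.5}]))  # updated sequence
print(len(sequence.all_rows()))  # 2 rows; store the sequence, then retrain on it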
Regarding claims 17 and 20:

Mimassi further teaches: wherein the multi-input machine learning model is trained on a linked sequence of training data sets, each training data set in the linked sequence of training data sets comprising training data covering a respective training data period. ([0083] According to some embodiments, machine learning engine 1505 may be configured to use any desirable machine learning techniques to learn or train food item models 1507 using the labeled examples. Examples of machine learning techniques that can be used include, but are not limited to, supervised learning-based techniques (e.g., artificial neural networks, Bayesian-based techniques, decision trees, etc.), unsupervised learning-based techniques (e.g., data clustering, expectation-maximization algorithms, etc.), reinforcement learning-based techniques, deep learning-based techniques, and the like. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve predictions, done with an ordered sequence of data.)

Mimassi does not teach: enriching third user data of the food establishment to obtain an additional training data set, the third user data comprising a sequence of third data points for an additional training data period, each third data point comprising an observed value of the target variable and a value of each of the establishment input features, and the third user data being enriched, for each third data point, using values of the complementary features corresponding to the third data point; linking the additional training data set to the linked sequence of training data sets to obtain an updated linked sequence of training data sets; and storing the updated linked sequence of training data sets.

Lee further teaches: storing the updated linked sequence of training data sets. ([0063] External system 620 may include any number of servers, hosts, systems, and/or databases that store data to be accessed by the system 610. One of ordinary skill in the art would reasonably interpret the sequence of training data sets as data that can be stored to be accessed by the system 610.)

It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, Kivatinos, and Quigley with Lee’s additional features listed above. One would have been motivated to do so in order to store data (Lee; [0063]). By incorporating the teachings of Lee, one would have been able to store the linked sequence of training data sets.

Mimassi, Lee, and Kivatinos do not teach: enriching third user data of the food establishment to obtain an additional training data set, the third user data comprising a sequence of third data points for an additional training data period, each third data point comprising an observed value of the target variable and a value of each of the establishment input features, and the third user data being enriched, for each third data point, using values of the complementary features corresponding to the third data point; and linking the additional training data set to the linked sequence of training data sets to obtain an updated linked sequence of training data sets.

Quigley teaches: enriching third user data of the food establishment to obtain an additional training data set, the third user data comprising a sequence of third data points for an additional training data period, each third data point comprising an observed value of the target variable and a value of each of the establishment input features, and the third user data being enriched, for each third data point, using values of the complementary features corresponding to the third data point. ([0533] Training can be done using training data, which may be collected or generated for training purposes. [0534] In embodiments, the machine learning system 502 trains a model based on training data. In embodiments, the machine learning system 502 may receive vectors containing user data (e.g., transaction history, preferences, wish list virtual assets, and the like), virtual asset data (e.g., price, color, fabric, and the like), and outcomes (e.g., redemption, exchanges, and the like). [0550] The analytics system 602 may include one or more analytic agents that are configured to execute a set of processes on collected data to produce an analytic result data structure. Once structured, an analytic agent may query the structured data set with a set of queries (e.g., SQL queries) and may further process the results of the queries (e.g., combine, aggregate, perform statistical analyses, and/or the like) to obtain an analytic result data structure.)

Quigley also teaches: linking the additional training data set to the linked sequence of training data sets to obtain an updated linked sequence of training data sets. ([1177] Training a machine-learning model may include supervised learning (for example, based on labelled input data), unsupervised learning, and reinforcement learning. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve system predictions, performed with an ordered sequence of data updated over time.)

It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, Kivatinos, and Quigley with Quigley’s additional features listed above. One would have been motivated to do so in order to improve prediction accuracy, reduce storage space, and increase processing speed (Quigley; [1176]). By incorporating the teachings of Quigley, one would have been able to use historical data to train the system.

Mimassi further teaches: retraining the multi-input machine learning model… ([0083] Examples of machine learning techniques that can be used include, but are not limited to, supervised learning-based techniques (e.g., artificial neural networks, Bayesian-based techniques, decision trees, etc.), unsupervised learning-based techniques (e.g., data clustering, expectation-maximization algorithms, etc.), reinforcement learning-based techniques, deep learning-based techniques, and the like. One of ordinary skill in the art would reasonably interpret reinforcement learning as an approach that includes retraining a model to achieve better performance.)

Mimassi does not teach: …on the updated linked sequence of training data sets.

Quigley further teaches: …on the updated linked sequence of training data sets. ([1177] Training a machine-learning model may include supervised learning (for example, based on labelled input data), unsupervised learning, and reinforcement learning. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve system predictions, performed with an ordered sequence of data updated over time.)

It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, Kivatinos, and Quigley with Quigley’s additional features listed above. One would have been motivated to do so in order to improve prediction accuracy, reduce storage space, and increase processing speed (Quigley; [1176]). By incorporating the teachings of Quigley, one would have been able to use updated data to train the system.

Regarding claims 21 and 25:

Mimassi further teaches: retraining the multi-input machine learning model… ([0083] Examples of machine learning techniques that can be used include, but are not limited to, supervised learning-based techniques (e.g., artificial neural networks, Bayesian-based techniques, decision trees, etc.), unsupervised learning-based techniques (e.g., data clustering, expectation-maximization algorithms, etc.), reinforcement learning-based techniques, deep learning-based techniques, and the like. One of ordinary skill in the art would reasonably interpret reinforcement learning as an approach that includes retraining a model to achieve better performance.)

Mimassi does not teach: …on the updated linked sequence of training data sets.

Quigley further teaches: …on the updated linked sequence of training data sets. ([1177] Training a machine-learning model may include supervised learning (for example, based on labelled input data), unsupervised learning, and reinforcement learning. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve system predictions, performed with an ordered sequence of data updated over time.)

It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, Kivatinos, and Quigley with Quigley’s additional features listed above. One would have been motivated to do so in order to improve prediction accuracy, reduce storage space, and increase processing speed (Quigley; [1176]). By incorporating the teachings of Quigley, one would have been able to use updated data to train the system.
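The retraining limitation of claims 9, 21, and 25 amounts to refitting the model on the flattened, updated linked sequence. The sketch below assumes scikit-learn is available and stands in a plain linear regressor for the claimed multi-input machine learning model; the feature rows and values are hypothetical.

```python
from sklearn.linear_model import LinearRegression

# Updated linked sequence, flattened to (features, target) rows; each row
# concatenates establishment and complementary feature values.
updated_linked_sequence = [
    ([40.0, 12.0, 0.0], 135.0),  # from an earlier training data period
    ([40.0, 21.0, 0.0], 150.0),  # from the additional training data set
    ([40.0, 8.0, 1.0], 90.0),
]

X = [features for features, _ in updated_linked_sequence]
y = [target for _, target in updated_linked_sequence]

# "Retraining" here is simply refitting on the full updated sequence; a real
# system might instead warm-start or fine-tune the previously trained model.
model = LinearRegression().fit(X, y)
print(model.predict([[40.0, 15.0, 0.0]]))
```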
Claims 10 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Mimassi (US 20220198586 A1, hereinafter “Mimassi”), in view of Lee et al. (US 20220358366 A1, hereinafter “Lee”), in further view of Kivatinos et al. (US 20200034707 A1, hereinafter “Kivatinos”), in further view of O’Donoghue et al. (US 20230119186 A1, hereinafter “O’Donoghue”), in further view of Quigley et al. (US 20230131603 A1, hereinafter “Quigley”) as applied to claims 8 and 21 above, in further view of Skeirik (US 5224203 A, hereinafter “Skeirik”).

Regarding claims 10 and 22:

Mimassi further teaches: retraining the multi-input machine learning model… ([0083] Examples of machine learning techniques that can be used include, but are not limited to, supervised learning-based techniques (e.g., artificial neural networks, Bayesian-based techniques, decision trees, etc.), unsupervised learning-based techniques (e.g., data clustering, expectation-maximization algorithms, etc.), reinforcement learning-based techniques, deep learning-based techniques, and the like. One of ordinary skill in the art would reasonably interpret reinforcement learning as an approach that includes retraining a model to achieve better performance.)

Mimassi does not teach: wherein each training data set is identified by a training data identifier, the operations further comprising: receiving a user selection of a training data identifier; generating a modified sequence of training data sets that commences at the training data set corresponding to the user selection and ends at the additional training data set; and …on the modified sequence of training data sets.

Quigley teaches: …on the modified sequence of training data sets. ([1177] Training a machine-learning model may include supervised learning (for example, based on labelled input data), unsupervised learning, and reinforcement learning. One of ordinary skill in the art would reasonably interpret reinforcement learning as an iterative training process to optimize system performance and improve system predictions, performed with an ordered sequence of data updated over time.)

It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, Kivatinos, and Quigley with Quigley’s additional features listed above. One would have been motivated to do so in order to improve prediction accuracy, reduce storage space, and increase processing speed (Quigley; [1176]). By incorporating the teachings of Quigley, one would have been able to use updated data to train the system.

Quigley does not teach: wherein each training data set is identified by a training data identifier, the operations further comprising: receiving a user selection of a training data identifier; and generating a modified sequence of training data sets that commences at the training data set corresponding to the user selection and ends at the additional training data set.

Skeirik teaches: wherein each training data set is identified by a training data identifier (Fig. 2, Steps 202 and 206: Store Input Data with Associated Timestamps in Historical Database and Store Training Input Data with Associated Timestamps in Historical Database); the operations further comprising: receiving a user selection of a training data identifier (Fig. 4, Step 404: Retrieve Input Data at Current Time from Historical Database); and generating a modified sequence of training data sets that commences at the training data set corresponding to the user selection and ends at the additional training data set ([Page 49, Column 25, Lines 33-37] The sequence of steps described above is the preferred embodiment used when the neural network 1206 can be effectively trained using a single presentation of the training set created for each new training input data 1306. [Page 49, Column 25, Lines 44-50] The neural network 1206 can save the training sets (that is, the training input data and the associated input data which is retrieved in step and module 308) in a database of training sets, which can then be repeatedly presented to the neural network 1206 to train the neural network. The user might be able to configure the number of training sets to be saved.)

Examiner notes that even though the limitation of retraining the multi-input machine learning model on the modified sequence of training data sets is taught by the combination of Mimassi, Lee, Kivatinos, and Quigley as documented above, Skeirik also teaches the same limitation, providing further support to the prior art rejection of said limitation. Skeirik teaches the limitation in at least [Page 49, Column 25, Lines 21-25]: After the error data 1504 has been computed (calculated) in the step and module 1012, the neural network 1206 is retrained using the error data 1504 and/or the training input data 1306. The present invention contemplates any method of training the neural network 1306.

It would have been obvious to one of ordinary skill in the art, at the time of applicant’s invention, to combine Mimassi, Lee, Kivatinos, and Quigley with Skeirik’s features listed above. One would have been motivated to do so, so that as new training data becomes available, new training sets are constructed and saved (Skeirik; [Page 49, Column 25, Lines 50-52]). By incorporating the teachings of Skeirik, one would have been able to use data identifiers for training data sets.
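Claims 10 and 22 add a selection mechanism over identified training data sets. The sketch below, using hypothetical identifiers of the form tds-YYYYQn, shows one way a modified sequence could commence at the user-selected set and end at the additional (most recent) set before retraining.

```python
from typing import Dict, List, Tuple

# Each training data set is identified by a training data identifier; the
# linked sequence is kept in training-period order. Identifiers are hypothetical.
linked_sequence: List[Tuple[str, Dict[str, str]]] = [
    ("tds-2025Q2", {"period": "2025-Q2"}),
    ("tds-2025Q3", {"period": "2025-Q3"}),
    ("tds-2025Q4", {"period": "2025-Q4"}),
    ("tds-2026Q1", {"period": "2026-Q1"}),  # the additional training data set
]

def modified_sequence(sequence: List[Tuple[str, Dict[str, str]]],
                      selected_id: str) -> List[Tuple[str, Dict[str, str]]]:
    """Return the sub-sequence commencing at the set matching the user-selected
    identifier and ending at the additional (last) set."""
    ids = [identifier for identifier, _ in sequence]
    start = ids.index(selected_id)  # raises ValueError for an unknown identifier
    return sequence[start:]         # the slice runs through the last set

# A user selection of "tds-2025Q3" yields the last three sets, which would then
# be the training input for retraining the multi-input model.
print([identifier for identifier, _ in modified_sequence(linked_sequence, "tds-2025Q3")])
```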
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL J TORRES CHANZA, whose telephone number is (571) 272-3701. The examiner can normally be reached Monday through Friday, 8am-5pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein, can be reached at (571) 270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/G.J.T./ Examiner, Art Unit 3625
/TIMOTHY PADOT/ Primary Examiner, Art Unit 3625

Prosecution Timeline

May 25, 2023
Application Filed
Apr 01, 2025
Non-Final Rejection — §101, §103
Jun 06, 2025
Interview Requested
Jun 16, 2025
Examiner Interview Summary
Jun 16, 2025
Applicant Interview (Telephonic)
Jun 23, 2025
Response Filed
Sep 18, 2025
Final Rejection — §101, §103
Nov 19, 2025
Response after Non-Final Action
Dec 18, 2025
Request for Continued Examination
Dec 28, 2025
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §101, §103
Mar 05, 2026
Interview Requested
Mar 10, 2026
Interview Requested
Mar 24, 2026
Examiner Interview Summary
Mar 24, 2026
Applicant Interview (Telephonic)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
