Prosecution Insights
Last updated: April 19, 2026
Application No. 18/117,425

CUSTOMIZATION OF FORECASTING SOLUTIONS

Non-Final OA: §101, §103, §112
Filed: Mar 04, 2023
Examiner: BRAHMACHARI, MANDRITA
Art Unit: 2144
Tech Center: 2100 — Computer Architecture & Software
Assignee: Zycus Infotech Pvt Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (311 granted / 407 resolved; +21.4% vs TC avg; above average)
Interview Lift: +29.8% on resolved cases with interview (a strong lift)
Typical Timeline: 3y 0m average prosecution; 27 applications currently pending
Career History: 434 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 407 resolved cases.

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is in response to the claims dated 3/4/2023. Claims pending in the case: 1-8.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in India on 2/24/2022. It is noted, however, that applicant has not filed a certified copy of the foreign application as required by 37 CFR 1.55.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 1 recites “obtain ensembled weights from: ensemble recurrent neural network (RNN) type architecture; a one versus rest ensemble RNN architecture; and forest of ensemble; to obtain an appropriate forecasting model that includes dynamically adaptive weights from ground truth along with the ensembled weights to create a final single weighted output”. The limitation does not specify how the dynamically adaptive weights may be generated. The specification mentions adaptive weights but gives no specifics on how the weights may be adapted. The examiner was unable to find support for this limitation in the current specification. The applicant is requested to identify the paragraphs and lines in the specification that support this limitation. All claims dependent on this claim are also rejected under 35 U.S.C. 112(a) by virtue of their respective direct and indirect dependencies.

Claim 8 recites “the dynamically adaptive weight module uses ground truth to generate additional error apart from error generated by respective neural network models”. The limitation does not provide specifics on how this additional error is generated. The specification repeats this language without adding information on what ground truth data is being used and how this error is to be calculated. The examiner was unable to find support in the current specification.

Claim 8 further recites “the weights from dynamically adaptive weights module are merged with generated weights of the respective neural network architecture by applying distinct mathematical expressions like but not limited to product, sum, mean, median, mode to normalize and generate a single weighted output”. Neither the claims nor the specification provides specifics on how the weights are to be merged. The examiner was unable to find support in the current specification. The applicant is requested to identify the paragraphs and lines in the specification that support this limitation.
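For orientation, the weight-merging step the examiner finds unsupported can at least be illustrated. The sketch below is one plausible reading of the claim language, assuming the “distinct mathematical expressions” reduce two per-model weight vectors elementwise and the result is renormalized; the function names and the renormalization step are assumptions, not drawn from the specification.

```python
import numpy as np

def merge_weights(ensemble_w, adaptive_w, op="mean"):
    """Merge ensembled weights with dynamically adaptive weights using a
    chosen elementwise reduction, then renormalize to sum to 1.
    (Median/mode reductions from the claim are omitted for brevity.)"""
    ops = {
        "product": lambda a, b: a * b,
        "sum":     lambda a, b: a + b,
        "mean":    lambda a, b: (a + b) / 2.0,
    }
    merged = ops[op](np.asarray(ensemble_w, float), np.asarray(adaptive_w, float))
    return merged / merged.sum()  # normalize into a single weighting scheme

def weighted_output(forecasts, weights):
    """Combine per-model forecasts into one final single weighted output."""
    return float(np.dot(weights, forecasts))
```

For example, `merge_weights([0.5, 0.3, 0.2], [0.4, 0.4, 0.2], op="product")` yields weights that still sum to 1 and preserve the models' relative ordering.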
Claim 8 further recites “identified raw data is processed for features like but not limited to price,…” and further “by applying distinct mathematical expressions like but not limited to product, sum,…”. The limitation attempts to claim content beyond the scope of the specification as well as beyond what is currently known in the art.

Claim 8 further recites “…missing data for any feature is generated using formulas 1 and 2 to obtain a final processed dataset”. The claim has no specifics on formulas 1 and 2 clarifying how this missing data is generated. While the specification provides some equations, the variables in the equations are not defined. Formulas 1 and 2 in the specification are not presented in a way that would reasonably convey to one skilled in the relevant art how they may be used to identify and generate missing data.

Claim 8 further recites “the recurrent neural network architecture processes the dataset as one versus rest on top feature to create multiple recurrent neural network architectures, and also creates forest of ensemble of such multiple recurrent neural network architectures for different feature grouping combined together”. With RNN and one versus rest being two entirely different concepts, the limitations do not explain how an RNN processes as one versus rest to create multiple networks. The examiner was unable to find any clarifying explanation in the specification.

All claims dependent on these claims are also rejected under 35 U.S.C. 112(a) by virtue of their respective direct and indirect dependencies.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Claim 1 in relevant part reads: “obtain ensembled weights from: …a one versus rest ensemble RNN architecture; and forest of ensemble;”. Based on the claim language, it is unclear what is being referred to as “a one versus rest ensemble RNN architecture; and forest of ensemble”. "One vs. rest" and "RNN" refer to different concepts in machine learning: the former is a multi-class classification strategy, while an RNN is a neural network designed for sequential data, and hence it is not clear what is being referred to as “a one versus rest ensemble RNN architecture”. It is also not clear what is being referred to as “forest of ensemble”; it is unclear whether this refers to a random forest or any tree ensemble learner. The terms used in the claim limitation are not standard in the art, and the specification does not provide a definition. As such, a person of reasonable skill in the art would not be apprised of the metes and bounds of the invention. For the purpose of examination, the limitation “a one versus rest ensemble RNN architecture; and forest of ensemble” is interpreted based on specification [67-68] as multiple RNNs using different feature sets of the input data. All claims dependent on this claim are also rejected under 35 U.S.C. 112(b) by virtue of their respective direct and indirect dependencies.

Claim 1 in relevant part reads: “to obtain an appropriate forecasting model that includes….”. It is unclear what criteria may be considered “appropriate”. The term “appropriate” does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim 1 in relevant part reads: “…such that the final weighted single output/forecast result has more accuracy and reduced false positives”. The terms “more” and “reduced” are relative terms which render the claim indefinite. The terms do not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. All claims dependent on this claim are also rejected under 35 U.S.C. 112(b) by virtue of their respective direct and indirect dependencies.

Claim 2 in relevant part reads: “…such that the deduced data is as good as real data”. The term “as good as” is a relative term which renders the claim indefinite. The term does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim 4 in relevant part reads: “…the recurrent neural network (RNN) is applied in one versus rest ensemble neural network architecture..”. It is unclear what is being referred to as an RNN applied in a one versus rest ensemble. "One vs. rest" and "RNN" refer to different concepts in machine learning, and therefore it is unclear what the RNN application is in a one versus rest ensemble. Thus one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. A reasonable interpretation of this limitation was not possible. For the purpose of examination, the limitation is mapped to RNN and statistical models like one vs. rest being used together in an ensemble.

Claim 8 in relevant part reads: “…missing data for any feature is generated using formulas 1 and 2 to obtain a final processed dataset”. It is unclear what is being referred to as formulas 1 and 2. Thus one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. A reasonable interpretation of this limitation was not possible.

Claim 8 in relevant part reads: “identified raw data is processed for features like but not limited to price,…” and further “by applying distinct mathematical expressions like but not limited to product, sum,…”. It is unclear what additional features and expressions are being claimed. A reasonable interpretation of this limitation was not possible.

Claim 8 in relevant part reads: “the recurrent neural network architecture processes the dataset as one versus rest on top feature to create multiple recurrent neural network architectures, and also creates forest of ensemble of such multiple recurrent neural network architectures for different feature grouping combined together”. It is unclear what is being referred to by “the recurrent neural network architecture processes the dataset as one versus rest…”. "One vs. rest" and "RNN" refer to different concepts in machine learning: the former is a multi-class classification strategy while an RNN is a neural network designed for sequential data, and hence this limitation is not clear. It is unclear in what way these two concepts are being combined.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Independent claim 1 recites “A system for the customization of forecasting solutions, comprising: raw data and ensemble neural network architecture”. A neural network architecture is a model for a software program implemented using software code and algorithms. There is no associated structural component within the claimed limitations, and as such the claims do not fall within at least one of the four categories of patent-eligible subject matter. All claims dependent on these claims are also rejected due to their direct or indirect dependencies.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, and 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Choi (Combining LSTM Network Ensemble via Adaptive Weighting for Improved Time Series Forecasting) in view of Amiri (US 20230022401).

Regarding claim 1, Choi teaches a system for the customization of forecasting solutions, comprising: raw data and ensemble neural network architecture (Choi: abstract, Pg. 2 col 1 [2-3]: ensemble forecast using multiple LSTM models with publicly available raw data); … obtain a processed dataset that is non-discreet and continuous (Choi: Pg. 3 col 1 [1]: sequential data as input), and the ensemble neural network architecture is customized to include plurality of dependent and independent features to obtain ensembled weights from (Choi: Fig. 1, Pg. 2 col 1 [1], col 2 [1]: combined weights from individual models) ensemble recurrent neural network (RNN) type architecture (Choi: abstract, Pg. 2 col 1 [2-3]: ensemble forecast using multiple LSTM models); …; to obtain an appropriate forecasting model that includes dynamically adaptive weights from ground truth along with the ensembled weights to create a final single weighted output such that the final weighted single output/forecast result has more accuracy and reduced false positives (Choi: Fig. 1, Pg. 2 col 1 [3]-col 2 [1]: ensemble forecasting model with ensembled weights to create final improved forecast; Pg. 4 col 1 section 4 [2]: validation data for finding combined weights (ground truth may be used for validation)).

However, Choi does not specifically teach: wherein the raw data is customized by cleaning and augmenting; a one versus rest ensemble RNN architecture; and forest of ensemble.

Amiri teaches, wherein the raw data is customized by cleaning and augmenting (Amiri: Fig. 1, [34]: find patterns in time series data and customize input data for use by the forecaster models – removes data not similar and adds similar data from different historical data (clean and augment)); a one versus rest ensemble RNN architecture; and forest of ensemble (Amiri: [25]: “The set of models used for prediction is not fixed and can incorporate any types of time series forecasters, such as those with different levels of complexity including Naïve, statistical models, Random Forest or other decision trees, regression neural network (RNN) models, etc.”). Please also refer to the 112(b) rejection above.

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Choi and Amiri because the combination would enable preparing the raw data for use as input to different models suited to different data types in an ensemble architecture. The combination enables the use of different RNN architectures in an ensemble model to improve forecast performance (see Amiri [21]) and provides an “automated and fast way of selecting forecasters that are well suited for various time series data types” (see Amiri [4]).

Regarding claim 3, Choi and Amiri teach the invention as claimed in claim 1 above, and wherein the ensemble neural network architecture of the present invention is a recurrent neural network (RNN) which creates dynamically adaptive weights for plurality of features run separately in a time-series fashion (Choi: Fig. 1, Pg. 2 col 1 [3]-col 2 [1]: “update combining weights at each time step in an adaptive and recursive way”).
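Choi's cited teaching is that combining weights are updated "at each time step in an adaptive and recursive way." The sketch below shows one common way such an update can be realized (an exponentiated-gradient-style penalty on each model's latest squared error); it is illustrative only and is not Choi's actual formula.

```python
import numpy as np

def update_combining_weights(weights, forecasts, truth, eta=0.5):
    """One adaptive step: shrink each model's weight in proportion to its
    squared error against the latest ground-truth observation, then
    renormalize. Illustrative only -- not the update used by Choi."""
    errors = (np.asarray(forecasts, float) - truth) ** 2
    w = np.asarray(weights, float) * np.exp(-eta * errors)
    return w / w.sum()
```

Called recursively at each time step, the weights drift toward the models with the smallest recent errors while always summing to one.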
Regarding claim 5, Choi and Amiri teach the invention as claimed in claim 1 above, and wherein the forest of ensemble is a combination of multiple RNN heterogeneous architecture of different feature set/grouping on different architecture on same data set, such that the final output of the forest is the combination of ensemble weights and dynamically adaptive weights deduced from dynamic loss function from group of RNN architecture (Choi: abstract, Pg. 2 col 1 [2-3], col 2 [1]: ensemble forecast using multiple RNN models and dynamically adjusting weights).

Regarding claim 6, Choi and Amiri teach the invention as claimed in claim 1 above, and wherein the ensemble of RNN architecture of the present invention is enabled to generate output for n epochs and pass on the information to the ground truth for validation to readjust the ensemble weights to obtain data stabilization (Choi: Fig. 1, Pg. 2 col 1 [3]-col 2 [1]: ensemble forecasting model with ensembled weights to create final improved forecast; Pg. 4 col 1 section 4 [2]: validation data for finding combined weights (ground truth)).

Regarding claim 7, Choi and Amiri teach the invention as claimed in claim 1 above, and wherein the forecasting model obtained from a training module is used in the query module for forecasting (Choi: abstract: forecasting model) (Amiri: [6]: forecasting model).

Claims 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Choi (Combining LSTM Network Ensemble via Adaptive Weighting for Improved Time Series Forecasting) in view of Amiri (US 20230022401), and further in view of Garvey (US 20200125988) and Bowles (The ECB Survey of Professional Forecasters).

Regarding claim 2, Choi and Amiri teach the invention as claimed in claim 1 above, and wherein the cleaning and augmentation … to deduce missing values in raw data and to do relevancy ranking to group top features such that the deduced data is as good as real data (Amiri: Fig. 1, [34]: find patterns in time series data and customize relevant data for use as input by the forecaster models – evaluates degree of correlation (deduce and rank relevancy) of features in the data); but not that this is done using an uncertainty co-efficient.

Garvey teaches that it is done using uncertainty data (Garvey: [85-86, 92]: data analytics to deduce patterns by analyzing series (missing values and relevancy); [159]: “compute season-specific uncertainty intervals”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Choi, Amiri and Garvey because the combination would enable preparing the raw data for use as model input by using known statistical methods like uncertainty and relevance determination. Although Garvey does not specifically recite an uncertainty co-efficient, using the uncertainty coefficient (Theil’s U) as a measure of the association between two nominal variables is a well-known approach in the art for comparing variables. It would have been obvious to one skilled in the art to use any known method to determine and group relevant data. Nonetheless, Bowles teaches using an uncertainty co-efficient (Bowles: Pg. 13 section 2.1 [1], Pg. 14 Box 1 last paragraph: use Theil’s U to compare two data points). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Choi, Amiri, Garvey and Bowles because the combination would enable using known statistical methods like the Theil’s U statistic to evaluate data differences in series data.

Regarding claim 4, Choi and Amiri teach the invention as claimed in claim 1 above, and wherein the recurrent neural network (RNN) is applied in one versus rest ensemble neural network architecture model on top features … to obtain combined output of different grouping features (Amiri: Fig. 1, [34]: find patterns in time series data and customize relevant data for use as input by the forecaster models – identify features; [25]: “The set of models used for prediction is not fixed and can incorporate any types of time series forecasters, such as those with different levels of complexity including Naïve, statistical models, Random Forest or other decision trees, regression neural network (RNN) models, etc.” – “one versus rest” is a statistical model – statistical models and RNN in an ensemble); but not top features identified by an uncertainty coefficient.

Garvey further teaches top features identified by an uncertainty coefficient (Garvey: [68, 78-82, 159]: compute uncertainty). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Choi, Amiri and Garvey because the combination would enable preparing the raw data for use as model input by using known statistical methods like uncertainty and relevance determination. Although Garvey does not specifically recite an uncertainty co-efficient, using the uncertainty coefficient (Theil’s U) as a measure of the association between two nominal variables is a well-known approach in the art for comparing variables. It would have been obvious to one skilled in the art to use any known method to determine and group relevant data. Nonetheless, Bowles teaches using an uncertainty co-efficient (Bowles: Pg. 13 section 2.1 [1], Pg. 14 Box 1 last paragraph: use Theil’s U to compare two data points). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Choi, Amiri, Garvey and Bowles because the combination would enable using known statistical methods like the Theil’s U statistic to evaluate data differences in series data. Please also refer to the 112(b) rejection above.
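The uncertainty coefficient the examiner names (Theil's U as a measure of association between two nominal variables) has a standard definition: U(X|Y) = (H(X) − H(X|Y)) / H(X), the fraction of the entropy of X explained by Y. A self-contained sketch follows; note that the forecast-accuracy "Theil's U" used in the Bowles survey is a different statistic that happens to share the name.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (nats) of a sequence of nominal values."""
    n = len(xs)
    return -sum(c / n * math.log(c / n) for c in Counter(xs).values())

def uncertainty_coefficient(x, y):
    """Theil's U(X|Y) = (H(X) - H(X|Y)) / H(X).
    0 means y tells us nothing about x; 1 means y fully determines x."""
    hx = entropy(x)
    if hx == 0:
        return 1.0  # x is constant: trivially determined
    n = len(x)
    # Conditional entropy H(X|Y) = sum over y-values of p(y) * H(X | Y=y)
    hxy = 0.0
    for yv, cnt in Counter(y).items():
        sub = [xi for xi, yi in zip(x, y) if yi == yv]
        hxy += cnt / n * entropy(sub)
    return (hx - hxy) / hx
```

Features could then be ranked by how strongly a candidate feature reduces uncertainty about the target, which is one reading of the claimed "relevancy ranking to group top features."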
Regarding claim 8, a prior art rejection has not been presented, as a reasonable interpretation of the limitations was not possible. Please refer to the 112(a) and 112(b) rejections above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure in the attached 892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANDRITA BRAHMACHARI, whose telephone number is (571) 272-9735. The examiner can normally be reached Monday to Friday, 11 am to 8 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tamara Kyle, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Mandrita Brahmachari/
Primary Examiner, Art Unit 2144
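Throughout the Office Action, the examiner's working interpretation of "a one versus rest ensemble RNN architecture; and forest of ensemble" is multiple RNNs using different feature sets of the input data (per specification [67-68]). That reading can be sketched as below, with a trivial stand-in forecaster in place of an RNN so the example stays self-contained; the function names, grouping, and mean pooling are assumptions for illustration, not the application's actual design.

```python
import numpy as np

def feature_group_ensemble(X, groups, forecaster):
    """One forecaster per feature group: each model sees only its own
    feature subset ("one versus rest" on features, per the examiner's
    interpretation), and the pooled forecasts are averaged (a minimal
    stand-in for the claimed "forest of ensemble")."""
    forecasts = [forecaster(X[:, cols]) for cols in groups]
    return float(np.mean(forecasts))

def naive_last_mean(sub):
    """Stand-in for an RNN: forecast the mean of the last observed row."""
    return float(sub[-1].mean())
```

With `X` of shape (time steps, features), `feature_group_ensemble(X, [[0], [1, 2]], naive_last_mean)` trains one model on feature 0 and another on features 1-2, then pools their forecasts.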

Prosecution Timeline

Mar 04, 2023
Application Filed
Dec 17, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596746: AUDIO PREVIEWING METHOD, APPARATUS AND STORAGE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596469: COMBINED DATA DISPLAY WITH HISTORIC DATA ANALYSIS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591358: DAMAGE DETECTION PORTAL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585979: MANAGING DATA DRIFT AND OUTLIERS FOR MACHINE LEARNING MODELS TRAINED FOR IMAGE CLASSIFICATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585992: MACHINE LEARNING WITH ATTRIBUTE FEEDBACK BASED ON EXPRESS INDICATORS (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 99% (+29.8%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 407 resolved cases by this examiner. Grant probability derived from career allow rate.
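From the numbers shown, the "with interview" figure appears to be the base grant probability plus the examiner's interview lift, capped below 100%. This is a hypothetical reconstruction of the dashboard's arithmetic, not its documented formula.

```python
def with_interview(base_pct, lift_pct, cap=99.0):
    """Hypothetical: add the interview lift to the base grant
    probability and cap the result (76 + 29.8 caps at 99)."""
    return min(base_pct + lift_pct, cap)
```

`with_interview(76.0, 29.8)` reproduces the displayed 99%.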
