DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 4–6, 11–13, and 18–19 were amended. Claims 1–20 are pending and are examined herein.
Claims 1-20 are rejected under 35 U.S.C. 101.
Claims 1-20 are rejected under 35 U.S.C. 103.
Response to Amendment
The amendment filed October 16, 2025 has been entered. Claims 4–6, 11–13, and 18–19 were amended. Claims 1–20 are pending and are examined herein. Applicant's amendments to the claims have overcome each objection and each rejection under 35 U.S.C. 112(b) and 112(d) previously set forth in the Non-Final Rejection Office Action mailed August 8, 2025.
Response to Arguments
Applicant's arguments filed October 16, 2025 regarding the 35 U.S.C. 101 rejection of claims 1–20 have been fully considered but are not persuasive. Applicant relies on features described in the specification ([0019], [0021]-[0023], [0066]) that are not recited in the claim language. The pending claims do not require training or executing a machine learning model, estimating or optimizing pipeline performance, or using any particular similarity computation. Instead, the claims recite segmenting time series data using lookback parameters, determining meta-features, comparing meta-features, and identifying pipelines and lookback parameters based on the comparison. As such, the claims recite an abstract idea, including mathematical concepts and mental processes, and the additional elements amount to no more than mere instructions to apply the abstract idea using generic computing components, which does not integrate the judicial exception into a practical application. Thus, the remarks in response to the 35 U.S.C. 101 rejection are not persuasive, and the rejection is maintained.
Applicant's arguments filed October 16, 2025 regarding the rejections under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argues that the applied art selects pipelines based on performance rather than by matching or comparing for the most similar meta-features. Laadan, however, teaches extracting “meta features describing both the dataset and each of the… pipelines” and ranking candidate pipelines based on those meta-features. The Feurer reference in Applicant’s newly submitted IDS expressly teaches using meta-features as a similarity metric by “rank[ing] all datasets by their L1 distance… in meta-feature space,” which amounts to comparing meta-features against meta-features to identify the most similar match. Feurer further teaches an offline phase that evaluates meta-features and stores framework configurations associated with those meta-features. Accordingly, the applied combination teaches identifying pipelines and related configuration parameters based on meta-feature similarity as recited.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 - 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1-20, in accordance with these steps, follows.
Step 1 Analysis:
Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter).
Claims 1-7 are directed to a method and therefore fall within the statutory category of process. Claims 8-14 are directed to a system for providing enhanced lookback window searching in a computing environment, which falls within the statutory category of machine. Claims 15-20 are directed to a computer program product for providing enhanced lookback window searching in a computing environment, which falls within the statutory category of manufacture.
Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis:
Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101.
Regarding claim 1, the following claim elements are abstract ideas:
segmenting timeseries data using lookback window parameters; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
determining meta-features for windowed data; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
identifying those of a plurality of predefined pipelines having a maximum amount of matching one or more predefined meta-features; (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
and identifying those of the lookback window parameters that result in the windowed data having the meta-features most similar to the meta-features of one or more of the plurality of predefined pipelines. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components. In addition, finding the most similar using similarity metrics as described in specification could fall under a mathematical concept.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:
In a computing environment by one or more processors comprising: (This falls under mere instructions to apply an exception. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)
Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, claim 2 recites the following abstract ideas:
creating the plurality predefined pipelines with the one or more predefined meta-features. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
Regarding claim 3, the rejection of claim 1 is incorporated herein. Further, claim 3 recites the following abstract ideas:
comparing the meta-features of the windowed data to identify those of the plurality of predefined pipelines matching the one or more predefined meta-features. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
Regarding claim 4, the rejection of claim 1 is incorporated herein. Further, claim 4 recites the following abstract ideas:
using the lookback window parameters to identify those of the plurality of predefined pipelines having the maximum amount of the matching one or more predefined meta-features. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
Regarding claim 5, the rejection of claim 1 is incorporated herein. Further, claim 5 recites the following abstract ideas:
Identifying the one or more lookback window parameters having the windowed data with the meta-features the most similar to the one or more meta-features. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
Regarding claim 6, the rejection of claim 1 is incorporated herein. Further, claim 6 recites the following abstract ideas:
modifying one or more of the lookback window parameters to adjust prediction targets. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components. Modifying parameters to adjust prediction is also a mathematical relationship, which is mathematical concept.)
Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following abstract ideas:
selecting the one or more of the plurality of predefined pipelines with selected lookback window parameters. (This is practical to perform in the human mind under its broadest reasonable interpretation aside from the recitation of generic computer components.)
Claims 8 – 14 recite substantially similar subject matter to claims 1 – 7 respectively and are rejected with the same rationale, mutatis mutandis.
Claim 8 further recites the following additional element:
One or more computers with executable instructions that when executed cause the system to: (This falls under mere instructions to apply an exception. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)
Claims 15 – 18, 20 recite substantially similar subject matter to claims 1 – 4, 7 respectively and are rejected with the same rationale, mutatis mutandis.
Claim 15 further recites the following additional element:
One or more computer readable storage media, and program instructions collectively stored on the one or more readable storage media (This falls under mere instructions to apply an exception. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception.)
Claim 19 recites substantially similar subject matter to the combination of claims 5 and 6 and is rejected with the same rationale, mutatis mutandis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-5, 8-12, 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (NPL: “FLOps: On Learning Important Time Series Features for Real-Valued Prediction”) in view of Laadan et al. (NPL: “RankML: Meta Learning-Based Approach for Pre-Ranking Machine Learning Pipelines”), further in view of Feurer et al. (NPL: “Efficient and Robust Automated Machine Learning”).
Regarding Claim 1, Patel teaches
A method for providing enhanced lookback window searching in a computing environment by one or more processors comprising: segmenting timeseries data using lookback window parameters; (Pg. 4 B. Forecasting Models section of Patel states “In this case, we assumed look-back window of length lw < T is provided. Next, we slice the original time series by sliding an overlapping window of length look-back window lw”)
determining meta-features for windowed data; (Pg. 5 V. Statistical Filtering of Times Series Features and Algorithm 1 of Patel states “As discussed in Section III-B, dataset Di is tabulated using suitable look-back window (Line 3 of algorithm). The look-back window is dataset specific and it is derived using spectral and frequency analysis… Once the data is prepared in the form where time series model can be learnt, we initiate feature extraction and score generation process.”)
having a maximum amount of matching (Pg. 2 of Patel states “Next, we apply the Statistical Filtering on the learnt representations and apply various operations, such as FTest, MITest and MLTest, to discover the importance of extracted features with respect to each time series dataset.”)
and identifying those of the lookback window parameters that result in the windowed data (Pg. 5 V. Statistical Filtering of Times Series Features and Algorithm 1 of Patel states “As discussed in Section III-B, dataset Di is tabulated using suitable look-back window (Line 3 of algorithm). The look-back window is dataset specific and it is derived using spectral and frequency analysis… Once the data is prepared in the form where time series model can be learnt, we initiate feature extraction and score generation process.”)
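For illustration only, and not as part of the claim mapping, the overlapping sliding-window slicing that Patel describes (a look-back window of length lw < T slid over the original series) may be sketched as follows; the function and variable names are hypothetical and do not appear in the reference:

```python
# Illustrative sketch of Patel-style overlapping sliding-window
# segmentation: each window of length lw is paired with the next
# observation as the prediction target. Names are hypothetical.
def segment_series(series, lw):
    """Slice a time series into overlapping windows of length lw."""
    if lw >= len(series):
        raise ValueError("look-back window must be shorter than the series")
    windows = [series[i:i + lw] for i in range(len(series) - lw)]
    targets = [series[i + lw] for i in range(len(series) - lw)]
    return windows, targets

windows, targets = segment_series([1, 2, 3, 4, 5], lw=2)
# windows -> [[1, 2], [2, 3], [3, 4]], targets -> [3, 4, 5]
```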
Patel does not explicitly teach that
Identifying those of a plurality of predefined pipelines having … one or more predefined meta-features;
having the meta-features most similar to the meta-features of one or more of the plurality of predefined pipelines.
However, Laadan and Feurer teach that
Identifying those of a plurality of predefined pipelines having … one or more predefined meta-features; (Pg. 1 Abstract of Laadan states “Given a previously-unseen dataset, a performance metric, and a set of candidate pipelines, RankML immediately produces a ranked list of all pipelines based on their predicted performance.” and Pg. 3 Section 4 The Proposed Method Overview of Laadan states “In the online phase, RankML receives a previously unseen dataset, a set of candidate pipelines and an evaluation metric. We then extract meta-features describing both the dataset and each of the candidate pipelines and use the ranking algorithm to produce a ranked list of the candidate pipelines. Next, the top-ranked pipelines are evaluated.” For the maximum amount of matching, Patel's F-test is relied upon, as both references discuss statistical similarity tests on time series features. Combining this with Laadan, the candidate pipelines can be ranked based on the F-test, and the highest rank corresponds to the maximum amount of matching. Pg. 3 of Feurer states “In this work, we apply meta-learning to select instantiations of our given machine learning framework that are likely to perform well on a new dataset. More specifically, for a large number of datasets, we collect both performance data and a set of meta-features, i.e., characteristics of the dataset that can be computed efficiently and that help to determine which algorithm to use on a new dataset ... Then, given a new dataset D, we compute its meta-features, rank all datasets by their L1 distance to D in meta-feature space and select the stored ML framework instantiations for the k = 25 nearest datasets for evaluation before starting Bayesian optimization with their results.” These meta-features are compared by distance in meta-feature space.)
having the meta-features most similar to the meta-features of one or more of the plurality of predefined pipelines. (Pg. 1 Abstract of Laadan states “Given a previously-unseen dataset, a performance metric, and a set of candidate pipelines, RankML immediately produces a ranked list of all pipelines based on their predicted performance.” and Pg. 3 Section 4 The Proposed Method Overview of Laadan states “In the online phase, RankML receives a previously unseen dataset, a set of candidate pipelines and an evaluation metric. We then extract meta-features describing both the dataset and each of the candidate pipelines and use the ranking algorithm to produce a ranked list of the candidate pipelines. Next, the top-ranked pipelines are evaluated.” The combination of Laadan and Patel enables selection of the window whose output meta-features are most aligned with successful pipelines. Pg. 3 of Feurer states “Then, given a new dataset D, we compute its meta-features, rank all datasets by their L1 distance to D in meta-feature space and select the stored ML framework instantiations for the k = 25 nearest datasets for evaluation before starting Bayesian optimization with their results.” These meta-features are compared by distance in meta-feature space, and selection is based on the nearest, meaning the most similar match.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Patel with those of Laadan and Feurer because all three address the goal of selecting an optimal pipeline configuration in automated machine learning, albeit with different approaches. Laadan teaches extracting meta-features for a dataset and candidate pipelines and producing a ranked list of candidate pipelines. Feurer further teaches selecting a stored ML framework configuration by computing meta-features and ranking by distance in meta-feature space to identify the nearest, i.e., most similar, meta-feature matches. One of ordinary skill in the art would have been motivated to incorporate the teachings of Patel into the combination of Laadan and Feurer to focus the search on window sizes that produce data representations most compatible with high-performing pipelines, which reduces search cost and improves accuracy. It would have been a predictable combination of applying a known meta-learning approach to an automated selection problem. Feurer further teaches an offline phase that stores configurations for later selection.
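For illustration only, and not as part of the rejection rationale, the nearest-neighbor selection Feurer describes (ranking stored datasets by L1 distance in meta-feature space and selecting the configurations of the k nearest) may be sketched as follows; all names and the example values are hypothetical:

```python
# Illustrative sketch of Feurer-style meta-learning selection:
# rank stored entries by L1 distance to a new meta-feature vector
# and return the configurations of the k nearest. Names hypothetical.
def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def nearest_configurations(new_meta, stored, k):
    """stored: list of (meta_feature_vector, configuration) pairs."""
    ranked = sorted(stored, key=lambda item: l1_distance(new_meta, item[0]))
    return [config for _, config in ranked[:k]]

stored = [([1.0, 2.0], "pipeline_A"),
          ([5.0, 5.0], "pipeline_B"),
          ([1.5, 2.5], "pipeline_C")]
nearest_configurations([1.0, 2.0], stored, k=2)
# -> ["pipeline_A", "pipeline_C"]
```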
Regarding Claim 2, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Patel, Laadan, and Feurer teaches
creating the plurality predefined pipelines with the one or more predefined meta-features. (Pg. 3 4.3 Training the Meta-Model of Laadan states “For each evaluated dataset and pipeline combination, we extract their corresponding meta-features and concatenate them. The joined meta-features vectors are used to train the ranking algorithm. The goal of the algorithm is to produce a list of all participating pipelines, ordered by their respective performance on the dataset.” Pg. 3 of Feurer states “In an offline phase, for each machine learning dataset in a dataset repository (in our case 140 datasets from the OpenML [18] repository), we evaluated a set of meta-features (described below) and used Bayesian optimization to determine and store an instantiation of the given ML framework with strong empirical performance for that dataset.”)
Regarding Claim 3, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Patel, Laadan, and Feurer teaches
comparing the meta-features of the windowed data to identify those of the plurality of predefined pipelines matching the one or more predefined meta-features. (Pg. 5 V. Statistical Filtering of Times Series Features and Algorithm 1 of Patel states “As discussed in Section III-B, dataset Di is tabulated using suitable look-back window (Line 3 of algorithm). The look-back window is dataset specific and it is derived using spectral and frequency analysis… Once the data is prepared in the form where time series model can be learnt, we initiate feature extraction and score generation process.” This corresponds to teaching the meta-features of the windowed data. Pg. 3 Section 4 The Proposed Method Overview of Laadan states “In the online phase, RankML receives a previously unseen dataset, a set of candidate pipelines and an evaluation metric. We then extract meta-features describing both the dataset and each of the candidate pipelines and use the ranking algorithm to produce a ranked list of the candidate pipelines. Next, the top-ranked pipelines are evaluated.” Pg. 3 Section 4.3 Training the Meta-Model of Laadan states “For each evaluated dataset and pipeline combination, we extract their corresponding meta-features and concatenate them. The joined meta-features vectors are used to train the ranking algorithm.”)
Regarding Claim 4, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Patel, Laadan, and Feurer teaches
using the lookback window parameters to identify those of the plurality of predefined pipelines having the maximum amount of the matching one or more predefined meta-features. (Pg. 5 V. Statistical Filtering of Times Series Features and Algorithm 1 of Patel states “As discussed in Section III-B, dataset Di is tabulated using suitable look-back window (Line 3 of algorithm). The look-back window is dataset specific and it is derived using spectral and frequency analysis… Once the data is prepared in the form where time series model can be learnt, we initiate feature extraction and score generation process.” Pg. 3 Section 4.3 Training the Meta-Model of Laadan states “For each evaluated dataset and pipeline combination, we extract their corresponding meta-features and concatenate them. The joined meta-features vectors are used to train the ranking algorithm. The goal of the algorithm is to produce a list of all participating pipelines, ordered by their respective performance on the dataset.” Patel generates different sets of meta-features by adjusting the lookback window parameters, and feeding these to Laadan's ranking algorithm can identify the pipelines that have the most matching meta-features.)
Regarding Claim 5, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Patel, Laadan, and Feurer teaches
Identifying the one or more lookback window parameters having the windowed data with the meta-features the most similar to the one or more meta-features. (Pg. 5 V. Statistical Filtering of Times Series Features and Algorithm 1 of Patel states “As discussed in Section III-B, dataset Di is tabulated using suitable look-back window (Line 3 of algorithm). The look-back window is dataset specific and it is derived using spectral and frequency analysis… Once the data is prepared in the form where time series model can be learnt, we initiate feature extraction and score generation process.” Pg. 1 Abstract of Laadan states “Given a previously-unseen dataset, a performance metric, and a set of candidate pipelines, RankML immediately produces a ranked list of all pipelines based on their predicted performance.” and Pg. 3 Section 4 The Proposed Method Overview of Laadan states “In the online phase, RankML receives a previously unseen dataset, a set of candidate pipelines and an evaluation metric. We then extract meta-features describing both the dataset and each of the candidate pipelines and use the ranking algorithm to produce a ranked list of the candidate pipelines. Next, the top-ranked pipelines are evaluated.” Combining their teachings enables selection of the window whose output meta-features are most aligned with successful pipelines.)
Claims 8-12 recite substantially similar subject matter as claims 1-5 respectively, and are rejected with the same rationale, mutatis mutandis.
Claims 15-18 recite substantially similar subject matter as claims 1-4 respectively, and are rejected with the same rationale, mutatis mutandis.
Claims 6-7, 13-14, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (NPL: “FLOps: On Learning Important Time Series Features for Real-Valued Prediction”) in view of Laadan et al. (NPL: “RankML: Meta Learning-Based Approach for Pre-Ranking Machine Learning Pipelines”), Feurer et al. (NPL: “Efficient and Robust Automated Machine Learning”), further in view of Shah et al. (NPL: “AutoAI-TS: AutoAI for Time Series Forecasting”).
Regarding Claim 6, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Patel, Laadan, and Feurer does not explicitly teach modifying one or more of the lookback window parameters to adjust prediction targets.
However, Shah teaches
modifying one or more of the lookback window parameters to adjust prediction targets. (Pg. 4 Section 4.1 Look-back Window Computation of Shah states “AutoAI-TS does not assume prior knowledge about input data, hence we propose and implement an automatic look-back window length discovery mechanism, which for given input data computes most suitable look-back window to be used by an deep learning and ML models for time series forecasting” The automatic look-back window length discovery mechanism is essentially an automatic adjustment of the lookback window to accurately forecast (predict) values of the time series.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Patel, Laadan, and Feurer with those of Shah because Shah illustrates the concept of choosing both the optimal look-back window and the pipeline for time-series forecasting. One of ordinary skill in the art would have been motivated to incorporate the teachings of Shah into the combination of Patel, Laadan, and Feurer because Shah’s “automatic look-back window length estimation mechanism for the ML-based pipeline” enables automation of efficient window-parameter tuning alongside meta-feature-guided pipeline ranking. Therefore, the combination of Patel, Laadan, Feurer, and Shah would have been obvious to a POSITA to achieve an AutoML workflow using window-length estimation, hyperparameter tuning, and meta-feature ranking.
Regarding Claim 7, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Patel, Laadan, Feurer, and Shah teaches
selecting the one or more of the plurality of predefined pipelines with selected lookback window parameters. (Pg. 4 Section 4 System Architecture and Fig. 2 Overall Architecture of Shah states “After look-back size has been decided, pipelines are generated from existing models and transformation… These pipelines are provided to pipeline selection mechanism shown as Time Series Daub in figure 2. The Time Series Daub (T-Daub) is given the training part of the input data and creates various splits from it and trains the pipelines on these data splits to approximate the accuracy of these pipelines on full input data. It then ranks these pipelines according to the approximate expected performance and one or more top performing pipelines are chosen, more details on T-Daub are provided in section 4.2.” Once the look-back size is decided, pipelines are generated, and these are ranked to select one or more top-performing pipelines.)
Claims 13-14 recite substantially similar subject matter as claims 6-7 respectively, and are rejected with the same rationale, mutatis mutandis.
Claims 19-20 recite substantially similar subject matter to the combination of claims 5 and 6, and to claim 7, respectively, and are rejected with the same rationale, mutatis mutandis.
Conclusion
Applicant's submission of an information disclosure statement under 37 CFR 1.97(c) with the fee set forth in 37 CFR 1.17(p) on 08/04/2025 prompted the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 609.04(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYUNGKWON HAN whose telephone number is (571) 272-5294. The examiner can normally be reached M-F, 9:00 AM-6:00 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen can be reached at (571)272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BYUNGKWON HAN/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121