Prosecution Insights
Last updated: April 19, 2026
Application No. 18/738,025

Automated Processing of Multiple Prediction Generation Including Model Tuning

Final Rejection — §101, §103
Filed
Jun 09, 2024
Examiner
MISIR, DAYWAYSHWAR D
Art Unit
2127
Tech Center
2100 — Computer Architecture & Software
Assignee
Databricks Inc.
OA Round
2 (Final)
84%
Grant Probability
Favorable
3-4
OA Rounds
2y 9m
To Grant
99%
With Interview

Examiner Intelligence

Grants 84% — above average
84%
Career Allow Rate
451 granted / 538 resolved
+28.8% vs TC avg
Strong interview lift: +47.8% higher allowance among resolved cases with an interview vs. without.
Typical timeline
2y 9m
Avg Prosecution
11 currently pending
Career history
549
Total Applications
across all art units

Statute-Specific Performance

§101
22.1%
-17.9% vs TC avg
§103
32.5%
-7.5% vs TC avg
§102
11.8%
-28.2% vs TC avg
§112
22.5%
-17.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 538 resolved cases

Office Action

§101 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The previous objections to the claims are withdrawn based on the amendments to those claims.

Response to Arguments

Applicant's arguments have been fully considered but they are not persuasive. Regarding applicant's arguments on the 35 U.S.C. 101 abstract-idea rejections, the Examiner disagrees and points out that the deployment of the models on the compute resources, as argued in the amended limitation, reads as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g). Regarding applicant's arguments that the prior art does not teach the amended limitation, the Examiner disagrees and points out that Ramanan teaches a job scheduler for the machine learning models that is, among other things, based on compute resources; see, for example, paragraph 83.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 2-6, 8-13, and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: All claims are directed to either a method, a non-transitory computer-readable storage medium, or a computer system, and thus satisfy Step 1 as falling into one of the statutory categories.
Step 2A, Prong One: Independent Claim 2 recites (the same analysis applies to similar independent Claims 9 and 16): determining whether drift has occurred with respect to the dataset used to train the set of machine-learning models, comprising determining whether a difference between the current dataset and the dataset exceeds a threshold. This limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore falls under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of determining a difference (and therefore drift) between datasets based on a threshold using observation and evaluation.

Step 2A, Prong Two: Independent Claim 2 recites the additional elements of (the same analysis applies to similar independent Claims 9 and 16):

accessing a set of machine-learning models trained using a dataset, wherein the dataset comprises at least a set of keys and key-values for the set of keys, and wherein each machine-learning model is trained using a respective subset of the dataset corresponding to a respective set of key-values for the set of keys; this limitation is considered as adding insignificant extra-solution activity (retrieving data) to the judicial exception - see MPEP 2106.05(g). The training of the machine learning model is considered as applying instructions to use a machine learning model as a tool, which includes training the model - see MPEP 2106.05(f).

receiving, via an interface, a current dataset that is an updated dataset; this limitation is considered as adding insignificant extra-solution activity (receiving data) to the judicial exception - see MPEP 2106.05(g).
responsive to a determination the drift has occurred and the difference exceeds the threshold, tuning parameters of one or more machine-learning models in the set of machine-learning models using the current dataset to generate another set of machine-learning models; tuning parameters of the machine learning models to generate another/updated set of models is considered as training machine learning models, and the training of machine learning models is considered as applying instructions to use a machine learning model as a tool, which includes training the models - see MPEP 2106.05(f).

deploying a first machine-learning model corresponding to a first set of key-values on a first compute resource and deploying a second machine-learning model corresponding to a second set of key-values on a second compute resource different from the first compute resource, wherein each compute resource is a virtual machine; this limitation is considered as adding insignificant extra-solution activity (providing data) to the judicial exception - see MPEP 2106.05(g).

and exposing the another set of machine-learning models via an interface of a computing service; this limitation is considered as adding insignificant extra-solution activity (providing data) to the judicial exception - see MPEP 2106.05(g).

The additional elements of Claims 9 and 16 that recite "a processor system" are recited at a high level of generality and serve only to apply the judicial exception to a generic computing component. Adding the words "apply it" (or an equivalent) to the judicial exception, giving mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not negate the abstract idea - see MPEP 2106.05(f). As such, the additional elements do not provide a practical application.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as adding the words "apply it" (or an equivalent) to the judicial exception - see MPEP 2106.05(f), as adding insignificant extra-solution activity (receiving data and providing data) to the judicial exception - see MPEP 2106.05(g) and MPEP 2106.05(d), and as no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are therefore not patent eligible.

Dependent Claim 3 (and similar dependent Claims 10 and 17) is also directed to an abstract idea without significantly more, as pointed out below:

Step 2A, Prong One: for a selected machine-learning model, determining whether an accuracy of the selected machine-learning model with respect to the dataset or the current dataset has decreased below another threshold; this limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore falls under the "Mental Processes" grouping of abstract ideas. That is, the human mind is capable of determining whether an accuracy of the selected machine-learning model with respect to the dataset or the current dataset has decreased below a threshold using observation and evaluation.

Step 2A, Prong Two: The additional limitation of receiving another current dataset is considered as adding insignificant extra-solution activity (receiving data) to the judicial exception - see MPEP 2106.05(g). As to the limitation responsive to a determination that the accuracy of the selected machine-learning model has decreased below the another threshold, tuning parameters of the selected machine-learning model using the another current dataset: tuning parameters of the machine learning models is considered as training machine learning models.
And the training of machine learning models is considered as applying instructions to use a machine learning model as a tool, which includes training the models - see MPEP 2106.05(f). As such, the additional elements do not provide a practical application.

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as adding the words "apply it" (or an equivalent) to the judicial exception - see MPEP 2106.05(f), and as adding insignificant extra-solution activity (receiving data) to the judicial exception - see MPEP 2106.05(g). The claims are therefore not patent eligible.

Dependent Claims 4-6, 8, 11-13, 15, and 18-20 are also all considered as adding insignificant extra-solution activity (allocating ML models to compute resources, caching datasets, providing ML models via an API, and defining the keys or dataset) to the judicial exception - see MPEP 2106.05(g) - and do not negate the abstract idea.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 2-21 are rejected under 35 U.S.C. 103 as being unpatentable over Parangi, US 2021/0232920 A1, in view of Ramanan, US 2023/0123157 A1.

Regarding Claim 2, Parangi teaches: A computer-implemented method, comprising: accessing a set of machine-learning models trained using a dataset (paragraphs 4, 31: discusses training and accessing the trained machine learning models), wherein the dataset comprises at least a set of keys and key-values for the set of keys (Fig. 3C; paragraphs 25-26: receiving a dataset with its characteristics, which include data types for the rows and columns of data in the data set, that is, the keys; and the dates, names, unique IDs, categories, etc. of the data types, that is, the key-values.
Examiner's note: Markus, US 2016/0162779 A1, also teaches key-value pairs of the dataset; see, for example, the Abstract and paragraph 4), and wherein each machine-learning model is trained using a respective subset of the dataset corresponding to a respective set of key-values for the set of keys (paragraphs 26, 28-29, 39: using characteristics of the dataset, such as the data type for each column of data in the data set, that is, the dataset format information or groupings, to generate/train the one or more machine learning models; and using a relatively small number of samples instead of the entire dataset for training, that is, using a subset of the dataset for training the model); receiving, via an interface, a current dataset that is an updated dataset (paragraph 24: receiving data using a user interface engine, which can include new or updated data); and exposing the another set of machine-learning models via an interface of a computing service (paragraph 31: wherein an API can be used programmatically by the user and exposes the models to the user for usage).

Parangi may not have taught: determining whether drift has occurred with respect to the dataset used to train the set of machine-learning models, comprising determining whether a difference between the current dataset and the dataset exceeds a threshold; responsive to a determination the drift has occurred and the difference exceeds the threshold, tuning parameters of one or more machine-learning models in the set of machine-learning models using the current dataset to generate another set of machine-learning models.

However, Ramanan shows these limitations (Abstract; paragraphs 20-21: discusses retraining the machine learning models on current data based on a determination that drift has occurred with respect to historical/past training data; the determination is based on a difference between the two datasets using a threshold.
Tuning parameters of one or more machine-learning models using the current dataset is equivalent to retraining the models, the retraining producing the another set of models).

And Ramanan teaches: deploying a first machine-learning model corresponding to a first set of key-values on a first compute resource and deploying a second machine-learning model corresponding to a second set of key-values on a second compute resource different from the first compute resource, wherein each compute resource is a virtual machine (paragraph 83: "job scheduler 420 determines a set of one or more models that are to be updated (e.g., re-trained) based at least in part on detection of temporal drift with respect to the set(s) of data used by the set of one or more models"; and "job scheduler 420 is a cron job that wakes up daily (e.g., every midnight or at another preset time when compute resources are not in high demand, etc.) and determines whether temporal drift has occurred with respect to the set(s) of data used by the set of one or more models (or invokes such a determination), and/or determines set of one or more models that are to be updated (e.g., re-trained) based at least in part on detection of temporal drift with respect to the set(s) of data used by the set of one or more models". Paragraph 85: "Evaluator module 450 assesses the model to determine whether the model is suitable for deployment". Paragraph 34: "security platform 140 can optionally perform static/dynamic analysis in cooperation with one or more virtual machine (VM) servers". That is, as discussed collectively, the job scheduler of Ramanan can deploy models to virtual machines based on compute resources of the virtual machines).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Ramanan with that of Parangi for determining whether drift has occurred with respect to the dataset used to train the set of machine-learning models, determining whether a difference between the current dataset and the dataset exceeds a threshold, and responsive to a determination the drift has occurred and the difference exceeds the threshold, tuning parameters of one or more machine-learning models in the set of machine-learning models using the current dataset to generate another set of machine-learning models; and deploying a first machine-learning model corresponding to a first set of key-values on a first compute resource and deploying a second machine-learning model corresponding to a second set of key-values on a second compute resource different from the first compute resource, wherein each compute resource is a virtual machine. The ordinary artisan would have been motivated to modify Parangi in the manner set forth above for the purposes of determining whether to retrain or update a machine learning model based on detection of drift in the training data [Ramanan: Abstract; paragraphs 20-21]. 
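The deployment limitation mapped above (one model per set of key-values, with each model on a distinct compute resource) can be sketched minimally as follows. This is an illustrative sketch only; the key names and VM identifiers are hypothetical and are not drawn from Parangi or Ramanan.

```python
# Hypothetical per-key models and distinct virtual machines. The claim only
# requires that the two models be deployed on different compute resources.
models_by_key = {"region=us": "model_us", "region=eu": "model_eu"}
available_vms = ["vm-0", "vm-1"]

# Assign each per-key model to its own VM (a stand-in for a real scheduler).
deployment = dict(zip(models_by_key.values(), available_vms))

# The two deployments land on different compute resources, as claimed.
assert deployment["model_us"] != deployment["model_eu"]
```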
Regarding Claim 3, Ramanan further teaches: The computer-implemented method of claim 2, comprising: receiving another current dataset; for a selected machine-learning model, determining whether an accuracy of the selected machine-learning model with respect to the dataset or the current dataset has decreased below another threshold; and responsive to a determination that the accuracy of the selected machine-learning model has decreased below the another threshold, tuning parameters of the selected machine-learning model using the another current dataset (Abstract; paragraphs 20-21, 83, 88, 123: discusses retraining the machine learning model on current data based on a determination that drift has occurred with respect to historical/past training data; the determination is based on a difference between the two datasets using a threshold. Further, this retraining is done iteratively based on new sets of data, that is, another current dataset. Tuning parameters of the machine-learning model using a current dataset is equivalent to retraining the model).

Regarding Claim 4, Ramanan further teaches: The computer-implemented method of claim 2, comprising: for each machine-learning model of the one or more machine-learning models, allocating the machine-learning model to a respective compute resource in a set of compute resources for tuning the parameters of the machine-learning model (paragraph 83: a job scheduler determines resources to be used in updating the model. See also Parangi, paragraph 83, which discusses scheduling tasks, which can include machine learning model training, based on and using system resources).
Regarding Claim 5, Ramanan further teaches: The computer-implemented method of claim 4, further comprising: for each machine-learning model of the one or more machine-learning models, caching a corresponding dataset of the current dataset for tuning the parameters of the machine-learning model in the respective compute resource (paragraph 46: discusses storing information pertaining to a model, such as training data, in cache. The cache memory can be for a particular compute resource).

Regarding Claim 6, Parangi further teaches: The computer-implemented method of claim 2, wherein the another set of machine-learning models is exposed as a composite model via the interface, and wherein the interface is an application programming interface (API) or a web interface (paragraph 31: wherein an API can be used programmatically by the user and exposes the models to the user for usage. See also Ramanan, paragraph 70).

Regarding Claim 7, Parangi further teaches: The computer-implemented method of claim 2, further comprising: receiving, from a client device, a query via the interface exposing the another set of machine-learned models (paragraphs 4, 13, 24, 26, 36, 44: receiving a request/data and its information/parameters used for a prediction); selecting at least one of the another set of machine-learning models for servicing the query (paragraphs 26, 39: determining the best-suited model for the prediction/task from the machine learning models); generating a prediction for the query using the at least one machine-learning model (paragraphs 4, 24, 26, 31: generating the prediction); and providing the prediction to the client device as a response to the query (paragraphs 31, 59: providing the prediction to the user).

Regarding Claim 8, Parangi further teaches: The computer-implemented method of claim 2, wherein the set of keys represent one or a combination of geographical region, item, price, date, and time information (Fig. 3C; paragraphs 25-26: shows and discusses the data with its characteristics, which include data types for the rows and columns of data in the data set, or keys, which include dates, names, unique IDs, categories, etc. of the data. Examiner's note: Markus, US 2016/0162779 A1, also teaches key-value pairs of the dataset; see, for example, the Abstract and paragraph 4).

Claims 9-15 are similar to Claims 2-8 and are rejected under the same rationale as stated above for those claims. Claims 16-21 are similar to Claims 2-7 and are rejected under the same rationale as stated above for those claims.

Examiner's Note: The Examiner cites particular pages, sections, columns, line numbers, and/or paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner, and the additional related prior art made of record that is considered pertinent to applicant's disclosure and further shows the general state of the art. The Examiner's interpretations in parentheses are provided with the cited references to assist the applicant in better understanding how the examiner interprets the prior art to read on the claims. Such comments are entirely consistent with the intent and spirit of compact prosecution.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
See PTO-892 for the relevant and pertinent prior art relating to this application, where, for example, Walters, US 2021/0287048 A1, teaches generating a plurality of data categories based on a sample dataset and generating a plurality of primary models of different model types using data from the corresponding one of the data categories as training data.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR, whose telephone number is (571) 272-5243. The examiner can normally be reached M-R 8-5 pm, F some hours. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DAVE MISIR/
Primary Examiner, Art Unit 2127
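For orientation, the Claim 2 flow at the center of both rejections (detect drift between the training dataset and a current dataset, then retune the per-key models when the difference exceeds a threshold) can be sketched in a few lines. This is a minimal illustration under stated assumptions: the drift metric here is a simple difference of dataset means and each 'model' is a per-key mean predictor; the claims specify neither, and nothing here is drawn from the applicant's actual implementation.

```python
# Minimal sketch of the Claim 2 limitation at issue. The mean-shift drift
# metric and the per-key mean "model" are illustrative assumptions only.

def drift_exceeds_threshold(train_values, current_values, threshold):
    """Return True if the two datasets differ by more than `threshold`.

    "Difference" here is the absolute gap between dataset means; any
    distribution distance (PSI, KL divergence, etc.) could stand in.
    """
    mean_train = sum(train_values) / len(train_values)
    mean_current = sum(current_values) / len(current_values)
    return abs(mean_train - mean_current) > threshold

def tune_models(models, current_by_key):
    """Retune each per-key model on its slice of the current dataset."""
    return {key: sum(vals) / len(vals)  # "model" = mean predictor
            for key, vals in current_by_key.items()}

# Dataset keyed by region (the claimed "keys" and "key-values").
train_by_key = {"us-east": [10, 11, 9], "eu-west": [20, 21, 19]}
current_by_key = {"us-east": [14, 15, 13], "eu-west": [20, 22, 21]}

models = {k: sum(v) / len(v) for k, v in train_by_key.items()}

all_train = [x for vals in train_by_key.values() for x in vals]
all_current = [x for vals in current_by_key.values() for x in vals]

if drift_exceeds_threshold(all_train, all_current, threshold=1.0):
    # Drift detected: generate the "another set" of models from current data.
    models = tune_models(models, current_by_key)
```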

Prosecution Timeline

Jun 09, 2024
Application Filed
Sep 10, 2025
Non-Final Rejection — §101, §103
Oct 27, 2025
Response Filed
Nov 13, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602619
MACHINE LEARNING SYSTEM AND MACHINE LEARNING METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12585991
DIGITAL RIGHTS MANAGEMENT OF MACHINE LEARNING MODELS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579475
ARTIFICIAL INTELLIGENCE MODEL GENERATED USING AGENTIC WORKFLOW SYSTEM AND METHOD FOR ARTIFICIAL INTELLIGENCE MODEL ALIGNED WITH DOMAIN-SPECIFIC PRINCIPLES
2y 5m to grant • Granted Mar 17, 2026
Patent 12572802
METHODS AND DEVICES IN PERFORMING A VISION TESTING PROCEDURE ON A PERSON
2y 5m to grant • Granted Mar 10, 2026
Patent 12562242
DATA DRIVEN FEATURIZATION AND MODELING
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+47.8%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 538 resolved cases by this examiner. Grant probability derived from career allow rate.
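The headline figures in these projections follow directly from the career counts shown above; a quick sanity check is below. The percentage-point reading of the interview lift is an assumption for illustration, since the page does not state how the lift is measured.

```python
# Grant probability as the career allow rate (counts from the examiner card).
granted, resolved = 451, 538
allow_rate = granted / resolved
assert round(allow_rate * 100) == 84   # matches the displayed 84%

# If the +47.8% interview lift is in percentage points, the implied allowance
# rate for resolved cases WITHOUT an interview would be roughly:
with_interview = 0.99                  # displayed "99% With Interview"
without_interview = with_interview - 0.478
assert round(without_interview * 100) == 51
```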
