Prosecution Insights
Last updated: April 19, 2026
Application No. 18/203,568

MACHINE-LEARNING BASED ARTIFICIAL INTELLIGENCE CAPABILITY

Status: Non-Final OA (§102)
Filed: May 30, 2023
Examiner: PARK, GRACE A
Art Unit: 2144
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 76% (421 granted / 557 resolved; +20.6% vs TC avg), above average
Interview Lift: +18.2% on resolved cases with an interview
Avg Prosecution: 3y 4m typical timeline; 23 applications currently pending
Total Applications: 580 across all art units
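A minimal sketch of the arithmetic behind the headline figures, assuming the grant probability is simply the career allow rate and the with-interview projection adds the reported lift to that base rate:

```python
# Career allow rate from the examiner's resolved cases
granted, resolved = 421, 557
allow_rate = granted / resolved               # ~0.756, reported as 76%

# Projected grant probability with an interview: base rate plus the
# reported +18.2% interview lift
interview_lift = 0.182
with_interview = allow_rate + interview_lift  # ~0.938, reported as 94%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

Both rounded values match the panel (76% and 94%), consistent with the probabilities being derived directly from the career counts.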

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 17.0% (-23.0% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center average is an estimate. Based on career data from 557 resolved cases.
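The per-statute deltas are stated relative to the Tech Center average. Assuming each delta is simply the examiner's rate minus that baseline, the baseline can be back-solved from each pair:

```python
# Examiner rate and delta vs Tech Center average, per statute,
# taken from the figures above
stats = {
    "101": (11.1, -28.9),
    "103": (53.7, +13.7),
    "102": (17.0, -23.0),
    "112": (10.4, -29.6),
}

# Back-solve the implied baseline: tc_avg = rate - delta
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Every pair back-solves to the same 40.0% baseline, which is consistent with the original chart drawing a single Tech Center average line.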

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 5, 6, 8, 10, 15, 16, 18, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 7, 9, 11-14, 17, and 19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bramble et al. (US Pub. 20220207444).

Referring to claim 1, Bramble discloses:

A method comprising:
storing a plurality of artificial intelligence (AI) capabilities in a cloud environment [fig. 1; pars. 29-31; AI capabilities (e.g., pre-trained base models, library of algorithms, customized models, trained models) are provided by AutoML, which is a cloud-based ML service; this means that the AI capabilities are stored in the cloud];
while storing the plurality of AI capabilities [fig. 1; pars. 29-31; note the storing of the AI capabilities in the cloud]:
receiving, from a computing device of a user, a request for a particular AI capability of the plurality of AI capabilities [pars. 29-31 and 38; an end user specifies a prediction model (i.e., at least one of the AI capabilities)];
in response to receiving training data based on input from the user, storing the training data in a tenancy associated with the user in the cloud environment [pars. 30, 31, 38, 79, and 82-85; the end user provides input training data for customizing the prediction model; the input training data is imported by a system API at a service endpoint; a multi-tenant model is used to assign the end user computing resources, which means that the data would be imported to cloud storage assigned to a tenancy associated with the user that can be accessed by the system API; also note SaaS, PaaS, IaaS service models];
in response to receiving the request [pars. 29-31 and 38; note the specifying of the prediction model by the end user]:
accessing the particular AI capability [pars. 29-31 and 38; note the prediction model];
training a machine-learned (ML) model based on the particular AI capability and the training data to produce a trained ML model [pars. 29-31; the imported data is used to train or customize the prediction model (i.e., generate a customized prediction model)];
generating an endpoint, in the cloud environment, that is associated with the trained ML model [par. 31; once the customized prediction model is generated, the customized prediction model can be deployed or exported via the system API at the service endpoint, which means that a pointer to a cloud storage location of the customized prediction model is generated];
providing the endpoint to the tenancy associated with the user [pars. 31, 79, and 82-85; the system API at the service endpoint can access the customized prediction model, which is stored in the cloud using the multi-tenant model];
wherein the method is performed by one or more computing devices [figs. 1 and 5; par. 28; AutoML is implemented within a computing and data processing environment].

Referring to claim 2, Bramble discloses the method of claim 1, wherein: the particular AI capability comprises a pre-trained model; training the ML model comprises fine-tuning the pre-trained model based on the training data [pars. 29-31; note the pre-trained base models that can be customized using the input training data to generate the customized prediction model].

Referring to claim 3, Bramble discloses the method of claim 1, wherein: the particular AI capability comprises a framework for training the ML model; training the ML model comprises leveraging the framework to train the ML model [fig. 1; pars. 29-31; note AutoML].

Referring to claim 4, Bramble discloses the method of claim 1, further comprising: while training the ML model, generating a plurality of statistics associated with training the ML model; storing the plurality of statistics in the tenancy associated with the user [pars. 30, 79, and 82-85; the input training data includes application-specific performance metrics used to evaluate the customized prediction model; note the multi-tenant model for assigning cloud storage associated with the end user].

Referring to claim 7, Bramble discloses the method of claim 1, further comprising: causing to be presented, on a screen of the computing device of the user, a list of multiple AI capabilities; wherein receiving the request comprises receiving input that selects the particular AI capability from among the AI capabilities in the list [fig. 4; pars. 29, 31, 17, and 51; a user interface is provided to the end user for selecting the prediction model; the system API also includes a list models method].

Referring to claim 9, Bramble discloses the method of claim 1, wherein the user is a first user and the tenancy is a first tenancy, further comprising: receiving, from a second computing device of a second user that is different than the first user, a second request for the particular AI capability of the plurality of AI capabilities; in response to receiving second training data based on second input from the second user, storing the second training data in a second tenancy that is different than the first tenancy and that is associated with the second user in the cloud environment; in response to receiving the second request: retrieving the particular AI capability; training a second ML model based on the particular AI capability and the second training data; generating a second endpoint, in the cloud environment, that is associated with the second ML model; providing the second endpoint to the second tenancy associated with the second user [see the rejection for claim 1; also note the multi-tenant model, which means that a customized prediction model can be generated for a second end user].

Referring to claim 11, see at least the rejection for claim 1. Bramble further discloses one or more non-transitory storage media storing instructions which, when executed by one or more computing devices, cause the claimed steps to be performed [fig. 5; program modules 11, processor(s) 12].

Referring to claim 12, see the rejection for claim 2. Referring to claim 13, see the rejection for claim 3. Referring to claim 14, see the rejection for claim 4. Referring to claim 17, see the rejection for claim 7. Referring to claim 19, see the rejection for claim 9.

Conclusion

The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Lin et al. (US Pub. 20210004696) discloses delivery of models from a model warehouse with an authentication service involving encryption. Singh et al. (US Pub. 20240095077) discloses storing pre-trained models in a model registry, user selection of a pre-trained model, and training of the selected pre-trained model using training data provided by the user.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE PARK, whose telephone number is (571) 270-7727. The examiner can normally be reached M-F 8AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571) 272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Grace Park/
Primary Examiner, Art Unit 2144
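The action's claim disposition partitions claims 1-20 cleanly; a quick sketch verifying that every claim is either rejected under §102(a)(2) over Bramble or objected to as allowable if rewritten in independent form:

```python
# Claim disposition as stated in the office action
rejected = {1, 2, 3, 4, 7, 9, 11, 12, 13, 14, 17, 19}
allowable_if_rewritten = {5, 6, 8, 10, 15, 16, 18, 20}

# The two sets are disjoint and together cover claims 1-20 exactly
assert rejected.isdisjoint(allowable_if_rewritten)
assert rejected | allowable_if_rewritten == set(range(1, 21))

print(f"{len(rejected)} rejected, "
      f"{len(allowable_if_rewritten)} allowable if rewritten")
# -> 12 rejected, 8 allowable if rewritten
```

With 8 of 20 claims already indicated as allowable subject matter, rewriting those dependents into independent form is one of the response paths the action itself invites.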

Prosecution Timeline

May 30, 2023: Application Filed
Feb 04, 2026: Non-Final Rejection under §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591807: SKETCHED AND CLUSTERED FEDERATED LEARNING WITH AUTOMATIC TUNING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585924: CAUSAL MULTI-TOUCH ATTRIBUTION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585728: METHOD AND APPARATUS FOR MACHINE LEARNING BASED INLET DEBRIS MONITORING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579150: Hybrid and Hierarchical Multi-Trial and OneShot Neural Architecture Search on Datacenter Machine Learning Accelerators (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579431: METHOD AND SYSTEM FOR MACHINE LEARNING BASED UNDERSTANDING OF DATA ELEMENTS IN MAINFRAME PROGRAM CODE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 94% (+18.2%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 557 resolved cases by this examiner. Grant probability is derived from the career allow rate.
