Prosecution Insights
Last updated: April 19, 2026
Application No. 18/197,224

OBJECTIVE FUNCTION OPTIMIZATION IN TARGET BASED HYPERPARAMETER TUNING

Status: Non-Final OA (§103)
Filed: May 15, 2023
Examiner: HUANG, YAO D
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 11m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 63% (78 granted / 124 resolved; +7.9% vs TC avg)
Interview Lift: +31.9% on resolved cases with interview (strong)
Average Prosecution: 3y 11m
Currently Pending: 18
Total Applications: 142 (across all art units)

Statute-Specific Performance

§101: 17.6% (-22.4% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§112: 22.9% (-17.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 124 resolved cases.

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1-3, 8-10, 13-14, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Qi et al. (US 2021/0304055 A1) (“Qi”) (cited in an IDS) in view of Rebuffi et al., “Learning multiple visual domains with residual adapters,” arXiv:1705.08045v5 [cs.CV] 27 Nov 2017 (“Rebuffi”).

As to claim 1, Qi teaches a computer-implemented [[0007]: “The system/apparatus may comprise one or more processors and a memory coupled to the one or more processors.
The memory may comprise instructions which, when executed by the one or more processors, cause the one or more processors to perform various ones of, and combinations of, the operations outlined above with regard to the method illustrative embodiment.”] method comprising: initializing a machine learning algorithm with a set of hyperparameter values; [[0037]: “For example, an initial set of hyperparameter values for N parameters of a computer model may be set as beta=(param1, param2, . . . paramN), along with an initial upper bound and lower bound for each of these hyperparameter values.” The computer model is a “machine learning computer model” ([0018]).] accessing a hyperparameter objective function that is defined at least in part on a plurality of domains of a search space that is associated with the machine learning algorithm, [[0035]: “As shown in FIG. 1, the AutoML process 110, comprises multiple stages of logic 112-116 which are repeatedly executed until the hyperparameter value settings for the ML model 140 are optimized, e.g., a particular time constraint is met, a particular amount of improvement in the performance of the model, e.g., a loss or error, is equal to or less than a particular threshold, a particular amount of improvement in performance is not able to be achieved, or any other suitable stopping criteria for the AutoML process 110.” See also [0040]-[0041] for examples. That is, the performance measure corresponds to a “hyperparameter objective function.” It is defined on a plurality of domains, since the data used in the training includes a plurality of domains. 
A “plurality of domains” is disclosed in [0005]: “executing an initial AutoML process on the machine learning model based on a plurality of datasets comprising a plurality of domains of data elements, utilizing the initially configured AutoML logic.” See also [0041]: “The one or more input datasets 130 may comprise data in a variety of different domains, e.g., domains D1-D4.”] wherein the search space comprises training datasets and evaluation datasets with each domain comprising a subdivision of the search space with at least one training dataset and at least one evaluation dataset, and wherein the hyperparameter objective function comprises […]; [[0042]: “In the case of training data, the labels of the data in the datasets 130 further comprise a ground truth classification or output that is a correct output of a properly trained ML model given the corresponding input data. This ground truth output may be compared to the actual output generated by the ML model 140 to evaluate a performance of the ML model 140 with regard to the correct output, e.g., a loss function may be used to calculate an error in the ML model 140 output” [0047]: “…trained on one or more input datasets, e.g., one or more datasets 132-138 having labeled domains, to recognize patterns of content indicative of particular domains and thereby classify data elements of the input datasets into a plurality of predefined domains, e.g., domains D1-D4, where an input dataset 132 may comprise data elements of various domains D1-D4 such that it represents a mixed domain dataset.” In regards to the limitation of an “evaluation dataset” for each domain, the above part of [0042] teaches that the dataset is both the “training dataset” and the “evaluation dataset.” See also [0041], which refers to “evaluate the performance of the ML model 140, the AutoML process 110 comprises a second stage 114” with the use of the input datasets 130.] 
for each trial of a hyperparameter tuning process: [The disclosed process is for “determination of the best performance set of hyperparameters for configuring the machine learning model” ([0020]). The limitation of “for each trial” does not require a plurality of trials. Nonetheless, it is taught that the process repeats, in the form of multiple trials. See [0066]: “This process may be repeated for each subsequent new dataset 310 received such that the learning of the hyperparameter sampling configuration parameters is continuously or periodically updated.”] training the machine learning algorithm for each domain using the at least one training dataset associated with each domain and the set of hyperparameter values, wherein the training outputs a plurality of machine learning models comprising a machine learning model for each domain; [[0060]: “As shown in FIG. 2, for each of these datasets, a learned value for the hyperparameter param1 is determined through the AutoML process, e.g., AutoML process 110 in FIG. 1.” As noted above, training is described in [0042]: “In the case of training data, the labels of the data in the datasets 130 further comprise a ground truth classification or output that is a correct output of a properly trained ML model given the corresponding input data. This ground truth output may be compared to the actual output generated by the ML model 140 to evaluate a performance of the ML model 140 with regard to the correct output.” Furthermore, as illustrated in FIG. 2, the same domains (e.g., D1 and D2) appear across multiple workspaces, each of which has a corresponding learned value. The limitation of a plurality of machine learning models is taught because, as shown in FIG. 2, different learned values correspond to different machine learning models.]
evaluating the machine learning model for each domain using the at least one evaluation dataset associated with each domain and the set of hyperparameter values, wherein the evaluating comprises generating the […]; [[0058]: “This initial set of hyperparameter sampling configuration parameters 210 are used to perform an initial AutoML process, such as described previously with regard to FIG. 1, and thereby generate for performance metrics for each sampled set of hyperparameter values which are then used to identify a particular setting of hyperparameter values that provide a best, or optimum, performance of the ML model when the ML model is configured with the selected set of hyperparameter values. As shown in FIGS. 2-4, the value of hyperparameter param1 that provides the best performance of the ML model based on the evaluation of the performance metrics generated by this initial AutoML operation is the “learned” value for the hyperparameter.” That is, for each workspace in FIG. 2, the performance is calculated in order to derive the optimal value that is learned for a particular hyperparameter.] calculating, using the hyperparameter objective function, a current trial objective score based on the […] and a domain weight associated with each domain; [[0040]: “the performance of the ML model 140, configured with the hyperparameter values corresponding to the selected set of hyperparameter values (beta*i), is evaluated with regard to one or more performance metrics of interest, e.g., accuracy of the output of the ML model 140 relative to a ground truth as may be measured by a loss function, for example.”] and determining whether the machine learning model has reached convergence based on the current trial objective score; [[0035]: “As shown in FIG. 
1, the AutoML process 110, comprises multiple stages of logic 112-116 which are repeatedly executed until the hyperparameter value settings for the ML model 140 are optimized, e.g., a particular time constraint is met, a particular amount of improvement in the performance of the model, e.g., a loss or error, is equal to or less than a particular threshold, a particular amount of improvement in performance is not able to be achieved, or any other suitable stopping criteria for the AutoML process 110.”] and in response to determining the machine learning model has reached convergence, providing at least one of the plurality of machine learning models. [[0035]: “Once the AutoML process 110 completes, a set of learned hyperparameter values are generated that provide an optimum performance of the ML model 140 during training of the ML model 140 and/or runtime deployment of the ML model 140. That is, this set of learned hyperparameters are learned from the AutoML process 110 and may be used to configure an instance of the ML model for training using a training dataset and/or in a runtime environment which processes new workloads using the configured model.”] Qi does not teach the limitation that the hyperparameter objective function comprises “a domain score for each domain that is calculated based on a number of instances within the at least one evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial” and the related limitation of the use of “the domain score for each domain” for the evaluating and the limitation of “based on the domain score for each domain and a domain weight associated with each domain” for calculation of the current trial objective score. 
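The claim 1 loop mapped above (per-domain training and evaluation, a weighted objective score, and a convergence check) can be sketched as follows. This is an editorial illustration only: the names `tune`, `score_fn`, and `propose_fn`, the stopping tolerance, and the update rule are assumptions, not taken from the claims or from Qi.

```python
# Hedged sketch of the claimed per-trial tuning loop. score_fn and
# propose_fn are hypothetical stand-ins for per-domain evaluation and
# hyperparameter search; the convergence test is one plausible reading.

def tune(domains, weights, score_fn, propose_fn, params, tol=1e-3, max_trials=50):
    """domains: list of (train_set, eval_set) pairs, one per domain.
    weights: domain weight for each domain.
    Returns the final hyperparameters and objective score."""
    prev = None
    for _ in range(max_trials):
        # Train/evaluate one model per domain, then combine domain scores.
        scores = [score_fn(params, train, evalset) for train, evalset in domains]
        objective = sum(w * s for w, s in zip(weights, scores))
        if prev is not None and abs(objective - prev) < tol:
            break  # convergence: objective improvement below threshold
        prev = objective
        params = propose_fn(params, objective)
    return params, objective
```

With a toy quadratic score and a contraction-style proposal, the loop settles near the optimum within a handful of trials.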
Rebuffi teaches “a domain score for each domain that is calculated based on a number of instances within the at least one evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial” [§ 4, paragraph 4: “Performance is measured in terms of a single scalar score S determined as in the decathlon discipline. Performing well at this metric requires algorithms to perform well in all tasks, compared to a minimum level of baseline performance for each. In detail, S is computed as follows: [See equations in expression (1)] where E_d is the average test error for each domain… The coefficient α_d is set to 1,000 (E_d^max)^(−γ_d) so that a perfect result receives a score of 1,000 (10,000 in total).” That is, referring to equation (1), α_d max{0, E_d^max − E_d}^γ_d constitutes a domain score for domain d. Note that, as further shown in the formula for E_d, this measures the test error over the number of instances in the evaluation set D_d^test.] and thus the related limitation of use of “the domain score for each domain” for evaluating and for calculation of a current trial objective score “based on the domain score for each domain and a domain weight associated with each domain” [As noted above, the individual scores E_d (or max{0, E_d^max − E_d}) are used to compute the scalar performance score S and are based on the domain weight α_d. This scalar score is analogous to the performance metric disclosed in the base reference.]
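Rebuffi's decathlon score, as quoted above, can be restated numerically. The sketch below assumes a uniform exponent γ_d = 2 and illustrative error rates; only the form S = Σ_d α_d max{0, E_d^max − E_d}^γ_d with α_d = 1000 (E_d^max)^(−γ_d) is taken from the paper.

```python
# Hedged sketch of the decathlon score from Rebuffi § 4, equation (1).
# gamma defaults to 2.0 as an assumption; err and err_max are error rates.

def domain_score(err, err_max, gamma=2.0):
    """alpha_d * max(0, E_d^max - E_d)^gamma_d for one domain, where
    alpha_d = 1000 * (E_d^max)^(-gamma_d), so a perfect result (err = 0)
    scores 1,000."""
    alpha = 1000.0 * err_max ** (-gamma)
    return alpha * max(0.0, err_max - err) ** gamma

def decathlon_score(errors, baselines, gamma=2.0):
    """Scalar score S: the sum of per-domain scores."""
    return sum(domain_score(e, b, gamma) for e, b in zip(errors, baselines))
```

A model no better than baseline in a domain (err >= err_max) contributes zero for that domain, which is the “no points are scored” behavior the rejection later cites for claim 6.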
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qi with the teachings of Rebuffi by implementing the domain-specific scoring technique of Rebuffi in the performance measurement process of Qi, specifically by implementing the hyperparameter objective function to comprise “a domain score for each domain that is calculated based on a number of instances within the at least one evaluation dataset that are correctly or incorrectly predicted by the machine learning algorithm during a given trial”, the use of “the domain score for each domain” for the evaluating, and the calculation of the current trial objective score so as to be “based on the domain score for each domain and a domain weight associated with each domain”, so as to arrive at the limitations of the instant claim. The motivation would have been to implement a measure of performance that evaluates a model for multiple-domain learning, in a way that assesses whether a method can successfully learn to perform well in several different domains at the same time, as suggested by Rebuffi, § 4, paragraph 1 (“In this section we introduce a new benchmark, called visual decathlon, to evaluate the performance of algorithms in multiple-domain learning. The goal of the benchmark is to assess whether a method can successfully learn to perform well in several different domains at the same time.”).

As to claim 2, the combination of Qi and Rebuffi teaches the computer-implemented method of claim 1, as set forth above.
Rebuffi further teaches “wherein the hyperparameter objective function is formulated to normalize instance level improvements and instance level regressions over a total number of instances to obtain an improvement score and a regression score, and wherein the domain score for each domain is calculated based on the improvement score and the regression score.” [As shown in § 4, equation (1) and the text below it, the domain score α_d max{0, E_d^max − E_d}^γ_d is calculated based on E_d, which in turn is computed by a summation divided by |D_d^test|. Here, |D_d^test| corresponds to the total number of instances. In this context, max{0, E_d^max − E_d} is a regression score because it measures the error (regression in performance) of the model, and the domain score is calculated based on the regression score E_d. In regards to the limitation of an improvement score, since the claim does not precisely define the meaning of this term, the term max{0, E_d^max − E_d}^γ_d is considered to be an improvement score because it also measures improvement over the baseline model represented by E_d^max.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qi and Rebuffi to have also arrived at the limitations of the instant dependent claim. Since the parts of Rebuffi discussed above are part of the techniques discussed in the rejection of the parent independent claim, the rationale for doing so is covered by the one given for the teachings of Rebuffi in the rejection of the parent independent claim.

As to claim 3, the combination of Qi and Rebuffi teaches the computer-implemented method of claim 2, as set forth above.
Rebuffi further teaches “wherein the improvement score is calculated based on: (i) the number of instances within the at least one evaluation dataset that are correctly predicted by the machine learning model during the given trial, (ii) the number of instances within the at least one evaluation dataset that are incorrectly predicted by a baseline machine learning model, and (iii) a total count of the number of instances within the at least one evaluation dataset, wherein the regression score is calculated based on: (i) the number of instances within the at least one evaluation dataset that are incorrectly predicted by the machine learning model during the given trial, (ii) the number of instances within the at least one evaluation dataset that are correctly predicted by a baseline machine learning model, and (iii) the total count of the number of instances within the at least one evaluation dataset.” [As shown in Rebuffi, equation (1), the score E_d is calculated based on the number of instances that are incorrectly predicted (the summation in equation (1), as indicated by “y≠Φ(x,d)”, which represents incorrect predictions) in a given trial, and the total number of instances, as represented by D_d^test. Since each instance is predicted in a manner that is either correct or incorrect, E_d and all resulting factors (including the regression score max{0, E_d^max − E_d} and the improvement score max{0, E_d^max − E_d}^γ_d) are based on both the number of correct and incorrect predictions by the model in question. Furthermore, E_d^max corresponds to a baseline error rate, which likewise is based on both the number of correct and incorrect predictions. Note that the baseline error rate is computed from the results of an actual model. See caption of Table 1: “The fully-finetuned model, written blue, is used as a baseline to compute the decathlon score.” Furthermore, since E_d is normalized by |D_d^test|, the scores are also based on the total count of the number of instances in the set D_d^test.]
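The claimed instance-level normalization admits a direct reading, sketched below. The set-difference formulation (trial-correct-but-baseline-wrong as improvement, trial-wrong-but-baseline-correct as regression, each over the total instance count) is an editorial assumption for illustration; the claim language, not this sketch, defines the scores.

```python
# Hedged sketch of instance-level improvement and regression ratios,
# normalized over the total number of evaluation instances.
# trial_correct / baseline_correct are per-instance booleans.

def improvement_regression(trial_correct, baseline_correct):
    """Improvement: instances the trial model gets right but the baseline
    got wrong. Regression: instances the trial model gets wrong but the
    baseline got right. Both are normalized by the instance count."""
    n = len(trial_correct)
    improved = sum(t and not b for t, b in zip(trial_correct, baseline_correct))
    regressed = sum(b and not t for t, b in zip(trial_correct, baseline_correct))
    return improved / n, regressed / n
```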
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qi and Rebuffi to have also arrived at the limitations of the instant dependent claim. Since the parts of Rebuffi discussed above are part of the techniques discussed in the rejection of the parent independent claim, the rationale for doing so is covered by the one given for the teachings of Rebuffi in the rejection of the parent independent claim.

As to claim 6, the combination of Qi and Rebuffi teaches the computer-implemented method of claim 1, as set forth above. Rebuffi further teaches “wherein the hyperparameter objective function is formulated to include a parameter for an acceptable regression ratio, m, for one or more of the plurality of domains.” [§ 4, equation (1) and the text below it: “E_d^max the baseline error (section 5), above which no points are scored.” That is, E_d^max constitutes an acceptable regression ratio, since if the error (regression) is too high, no points are scored. Additionally, referring to equation (1), (E_d^max − E_d) corresponds to a parameter for that regression ratio for the domain d, and E_d^max is a ratio because E_d is also a ratio under the definition of equation (1), in which E_d represents the percentage of errors.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qi and Rebuffi to have also arrived at the limitations of the instant dependent claim. Since the parts of Rebuffi discussed above are part of the techniques discussed in the rejection of the parent independent claim, the motivation for doing so is covered by the one given for the teachings of Rebuffi in the rejection of the parent independent claim.

As to claim 7, the combination of Qi and Rebuffi teaches the computer-implemented method of claim 6, as set forth above.
Rebuffi further teaches “wherein the parameter is defined based on a regression score, and if the regression score is less than the acceptable regression ratio, m, then the regression score is set to zero, otherwise the regression score is calculated based on: (i) the number of instances within the at least one evaluation dataset that are incorrectly predicted by the machine learning model during the given trial, (ii) the number of instances within the at least one evaluation dataset that are correctly predicted by a baseline machine learning model, and (iii) the total count of the number of instances within the at least one evaluation dataset.” [As shown in Rebuffi, equation (1) and the following text, the parameter (E_d^max − E_d) is defined based on a regression (error) score E_d, where, unless the score E_d is less than the acceptable ratio E_d^max, the score (note that both E_d^max and the whole “max” expression correspond to the score) is set to zero by the operation max{0, E_d^max − E_d}. Otherwise, as shown in equation (1), the score E_d (and the max expression) is calculated based on the number of instances that are incorrectly predicted (the summation in equation (1), as indicated by “y≠Φ(x,d)”, which represents incorrect predictions) in a given trial, and the total number of instances, as represented by D_d^test. Furthermore, E_d^max corresponds to a baseline error rate. Since an error rate is based on the number of incorrect predictions, an error rate is also implicitly based on the number of correct predictions. Note that the baseline error rate is computed from the results of an actual model.
See caption of Table 1: “The fully-finetuned model, written blue, is used as a baseline to compute the decathlon score.”]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qi and Rebuffi to have also arrived at the limitations of the instant dependent claim. Since the parts of Rebuffi discussed above are part of the techniques discussed in the rejection of the parent independent claim, the motivation for doing so is covered by the one given for the teachings of Rebuffi in the rejection of the parent independent claim.

As to claims 8-10 and 13-14, these claims are directed to a system for performing operations that are the same or substantially the same as those of claims 1-3 and 6-7. Therefore, the rejections made to claims 1-3 and 6-7 are applied to claims 8-10 and 13-14, respectively. Furthermore, Qi teaches “a system comprising: one or more processors; and one or more computer-readable media storing instructions which, when executed by the one or more processors, cause the system to perform operations” [[0007]: “The system/apparatus may comprise one or more processors and a memory coupled to the one or more processors. The memory may comprise instructions which, when executed by the one or more processors, cause the one or more processors to perform various ones of, and combinations of, the operations outlined above with regard to the method illustrative embodiment.”]

As to claims 15-17 and 20, these claims are directed to a computer-readable medium for performing operations that are the same or substantially the same as those of claims 1-3 and 7. Therefore, the rejections made to claims 1-3 and 7 are applied to claims 15-17 and 20, respectively.
Furthermore, Qi teaches “one or more non-transitory computer-readable media storing instructions which, when executed by one or more processors, cause a system to perform operations” [[0006]: “a computer program product comprising a computer useable or readable medium having a computer readable program is provided. The computer readable program, when executed on a computing device, causes the computing device to perform various ones of, and combinations of, the operations outlined above with regard to the method illustrative embodiment.”]

2. Claims 4-5, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Qi in view of Rebuffi, and further in view of Mindermann et al., “Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learnt,” Proceedings of the 39th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 17-23 July 2022 (“Mindermann”).

As to claim 4, the combination of Qi and Rebuffi teaches the computer-implemented method of claim 1, but does not teach the further limitations of the instant dependent claim that the hyperparameter objective function is formulated to “exclude unstable instances, which are instances within the at least one evaluation dataset that are determined to yield prediction results that differ from one another using a same machine learning model.” Mindermann teaches “exclude unstable instances, which are instances within the at least one evaluation dataset” [Page 4, first paragraph: “We now provide intuition on why reducible holdout loss selection (RHO-LOSS) avoids redundant, noisy, and less relevant points… ii) Noisy points. While prior methods select based on high training loss (or gradient norm), not all points with high loss are informative—some may have an ambiguous or incorrect (i.e. noisy) label. The labels of such points cannot be predicted using the holdout set (Chen et al., 2019). Such points have high IL and, consequently, low reducible loss.
These noisy points are less likely to be selected compared to equivalent points with less noise.” As shown in Algorithm 1 on page 3, the selection of the points in line 8 is based on the top n_b, which tends to exclude the “noisy points” discussed above. Note that the noisy points as defined here are those that have an ambiguous label, and are thus unstable in the sense that they cannot be predicted in a manner that results in effective training in terms of accuracy. Regarding the context that the hyperparameter objective function is formulated for such exclusion, the Examiner notes that the context of the existing reference already teaches the formulation of the objective function in a manner that is dependent on training.] “that are determined to yield prediction results that differ from one another using a same machine learning model” [As shown in Algorithm 1 on page 3, lines 4-10 are part of a training process, with line 10 being the parameter update process. Since the model is updated over time, meaning that the result of the model varies depending on its parameters, the feature that the same point results in different predictions for the same model at different stages of the training process would merely be a workable range of the predictions discoverable under routine experimentation. The Examiner notes that the instant claim does not define the reason that the prediction differs, nor does it define the circumstances in which the differing results are calculated. Therefore, when the teachings of this reference are applied to the existing reference, the instant limitation would have been obvious as a discovery of a workable range of predictions over different training iterations in routine experimentation. See MPEP § 2144.05(II)(A), stating that where the general conditions of a claim are disclosed in the prior art, it is not inventive to discover the optimum or workable ranges by routine experimentation.]
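The cited selection step (Mindermann, Algorithm 1, line 8) can be sketched as a top-n_b ranking by reducible loss. The names and loss values below are illustrative assumptions; in the paper, the irreducible loss comes from a model trained on a holdout set.

```python
# Hedged sketch of reducible-holdout-loss (RHO-LOSS) point selection.
# Reducible loss = training loss - irreducible (holdout) loss. Noisy
# points have high irreducible loss, hence low reducible loss, and so
# tend to fall outside the selected top-n_b.

def select_points(train_losses, irreducible_losses, nb):
    """Return indices of the top-nb points ranked by reducible loss."""
    reducible = [t - i for t, i in zip(train_losses, irreducible_losses)]
    ranked = sorted(range(len(reducible)), key=lambda k: reducible[k], reverse=True)
    return ranked[:nb]
```

Everything outside the returned index set is excluded from the update, which is the behavior the rejection maps to the claimed subtraction of excluded instance counts.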
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of the references combined thus far with the teachings of Mindermann by implementing the training process of Qi, as modified thus far, to use the reducible holdout loss selection technique taught in Mindermann, so as to arrive at the limitations of the instant claim. The motivation would have been to avoid points that are not learnable, so as to increase accuracy and training speed. See Mindermann, abstract: “In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. RHO-LOSS trains in far fewer steps than prior art, improves accuracy, and speeds up training on a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT).”

As to claim 5, the combination of Qi, Rebuffi, and Mindermann teaches the computer-implemented method of claim 4, as set forth above. Mindermann further teaches “wherein excluding unstable instances comprises: (i) subtracting a count of the excluded unstable instances from the number of instances within the at least one evaluation dataset that are correctly predicted by the machine learning model during the given trial, and (ii) subtracting the count of the excluded unstable instances from the number of instances within the at least one evaluation dataset that are incorrectly predicted by the machine learning model during the given trial.” [In Algorithm 1, line 8 samples the top-n_b samples, which therefore subtracts out the count of the excluded unstable samples from all measures of performance, whether they are correctly or incorrectly predicted.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Qi, Rebuffi, and Mindermann to have also arrived at the limitations of the instant dependent claim.
Since the parts of Mindermann discussed above are part of the techniques discussed in the rejection of the parent dependent claim, the motivation for doing so is covered by the one given for the teachings of Mindermann in the rejection of the parent dependent claim.

As to claims 11-12 and 18-19, the further limitations recited in these claims are the same or substantially the same as those of claims 4-5. Therefore, the rejections made to claims 4-5 are applied to claims 11-12 and 18-19.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following document depicts the state of the art and related techniques: Dietterich, “Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms,” Oregon State University, December 30, 1997, teaches various metrics for comparing models, including those that account for correct/incorrect predictions of a different model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAO DAVID HUANG, whose telephone number is (571) 270-1764. The examiner can normally be reached Monday - Friday, 9:00 am - 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Y.D.H./
Examiner, Art Unit 2124

/MIRANDA M HUANG/
Supervisory Patent Examiner, Art Unit 2124

Prosecution Timeline

May 15, 2023: Application Filed
Jan 24, 2026: Non-Final Rejection (§103)
Mar 31, 2026: Examiner Interview (Telephonic)
Apr 01, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536455: "Method for Early Warning Brandish of Transmission Wire Based on Improved Bayes-Adaboost Algorithm" (granted Jan 27, 2026; 2y 5m to grant)
Patent 12517958: "SYSTEM AND METHOD FOR NEXT STEP PREDICTION OF ICS FLOW USING ARTIFICIAL INTELLIGENCE/MACHINE LEARNING" (granted Jan 06, 2026; 2y 5m to grant)
Patent 12518218: "DYNAMICALLY SCALABLE MACHINE LEARNING MODEL GENERATION AND RETRAINING THROUGH CONTAINERIZATION" (granted Jan 06, 2026; 2y 5m to grant)
Patent 12488279: "DOMAIN-SPECIFIC CONSTRAINTS FOR PREDICTIVE MODELING" (granted Dec 02, 2025; 2y 5m to grant)
Patent 12475373: "INFORMATION PROCESSING APPARATUS AND METHOD AND PROGRAM FOR GENERATING INTEGRATED MODEL" (granted Nov 18, 2025; 2y 5m to grant)

Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 63% (95% with interview, +31.9%)
Median Time to Grant: 3y 11m
PTA Risk: Low

Based on 124 resolved cases by this examiner. Grant probability derived from career allow rate.
