Prosecution Insights
Last updated: April 19, 2026
Application No. 18/275,310

Method and Apparatus for Selecting Machine Learning Model for Execution in a Resource Constraint Environment

Non-Final OA: §101, §103
Filed
Aug 01, 2023
Examiner
SAX, STEVEN PAUL
Art Unit
2146
Tech Center
2100 — Computer Architecture & Software
Assignee
Telefonaktiebolaget LM Ericsson (publ)
OA Round
1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% — above average (320 granted / 460 resolved; +14.6% vs TC avg)
Interview Lift: +44.8% on resolved cases with interview
Typical Timeline: 4y 0m avg prosecution; 20 currently pending
Career History: 480 total applications across all art units

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 62.5% (+22.5% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 460 resolved cases

Office Action

Detailed Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. The Preliminary Amendment filed 8/1/23 has been entered. Claims 1-20 have been cancelled. Claims 21-39 are pending.

Claim Rejections - 35 USC § 101

3. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

4. Claims 21-39 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more and thus is directed to non-patentable subject matter. Specifically, the claims are directed toward the judicial exception of an abstract idea without reciting additional elements that amount to significantly more than the judicial exception. The rationale for this determination is in accordance with USPTO guidelines, applies to all statutory categories, and is explained in detail below.

When considering subject matter eligibility under 35 U.S.C. 101, (1) it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, (2a) it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea), and if so (2b), it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself.
Examples of abstract ideas include certain methods of organizing human activities; mental processes; and mathematical concepts.

STEP 1: Per Step 1 of the two-step analysis, the claims are determined to include a method (independent claim 21) and an apparatus (independent claim 30), along with their respective dependent claims. Therefore, the claims are directed to a statutory eligibility category.

Step 2A, Prong 1: Independent claims 21 and 30 recite: calculating a complexity of each machine learning model in the first set of machine learning models (a person can mentally use a mathematical formula to calculate a particular value representing a metric regarding each set of instructions and then label each set with the corresponding metric value); requesting resource constraints from the execution environment (this may simply be mentally acquiring various data to use in the algorithm or formula); determining, from the first set of machine learning models, a second set of machine learning models with at least one suitable machine learning model to be deployed, wherein the determining is based on the calculated complexity and the user defined constraints received from the execution environment (a person can mentally compare the results of data evaluation [including labels associated with the evaluated data] organized with other data and choose particular sets of instructions based on the comparisons).

With regard to the dependent claims:

Regarding claims 22 and 33, a person may mentally assign rank labels to sets of instructions and select a set with the highest rank.

Regarding claims 23 and 34, this just describes how the mental function or algorithm used to choose the sets of instructions is based on the various calculated values and inputted data.

Regarding claims 24 and 35, note the alternative language and that this just describes how the mental function or algorithm may be a rule-based policy, which again may comprise mental steps.
Regarding claims 25 and 36, this again may be mental steps to determine whether a set of instructions is used or not.

Regarding claims 26 and 37, this just shows that the mental choosing step includes using a rule-based policy, which may comprise mental steps, and describes how it may pick a certain set of instructions for varying different evaluated values and data inputs.

Regarding claims 27 and 38, this just describes some of the inputted data values.

Regarding claim 28, this just describes some of the parameters used in calculating/evaluating the values.

Regarding claim 29, this just describes the execution environment in which the mental steps determine whether the set of instructions should be deployed. In other words, these environments are just labels represented by data and used in the mental step algorithm.

Claims 31 and 39 merely describe generic computer components such as the processor or apparatus.

All these claim features may be accomplished by applying particular calculations, groupings, inspection, and general manipulation of data. The invention is thus directed to the mental process grouping of abstract ideas because the claims cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is "directed to" the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
The components "the apparatus", "processor", "memory", and "execution environment… radio base station, IoT device, edge computer" are applying the algorithm to well-known generic computers. If the execution environment is given a physical interpretation, then "requesting resource constraints from the execution environment" is also applying the algorithm to well-known generic computers. "Resource shortage function trained based on ….", "receiving a request for a machine learning model solving a task using a feature set", and "retrieving, from a model store, a first set of machine learning models that solve the task using at least a subset of features" are insignificant extra-solution activities and simply describe the data used in the algorithm.

Each claim as a whole, looking at the additional elements individually and in combination, does not integrate the judicial exception into a practical application. In addition, all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05. They are used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that they amount to no more than mere instructions to apply the exception using any generic computer; the limitations provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f).
MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words "apply it" (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.

Thus, under Step 2A, the Examiner holds that the claims are directed to concepts identified as abstract ideas.

STEP 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.

Regarding "resource shortage function trained based on ….", "requesting resource constraints from the execution environment", "receiving a request for a machine learning model solving a task using a feature set", and "retrieving, from a model store, a first set of machine learning models that solve the task using at least a subset of features," these insignificant extra-solution activities are well-understood, routine, and conventional activities. See receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362.

Regarding "execution environment… radio base station, IoT device, edge computer", these are labels for data that are used in the algorithm, and the claimed invention itself does not even need to include these physical pieces of equipment.
Thus, the limitations remain mere instructions to apply the judicial exception using a generic computer. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claims are not patent eligible.

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 21-39 are rejected under 35 U.S.C. 103 as being unpatentable over Appel et al. "Appel" (US 12159238 B2) and Ormont et al. "Ormont" (US 10990568 B2). (Please see the attached copies of Appel and Ormont that number paragraphs in the same format as that used in this Action.)

6.
Regarding claim 21, Appel shows a method, performed by an apparatus, for selecting a machine learning model to be deployed in an execution environment having resource constraints (para 4-5, 30, 42 show general selection of the machine learning model to be deployed in a cloud computing environment based on resource consumption constraints), the method comprising:

receiving a request for a machine learning model solving a task using a feature set (para 42, 52, 62 show receiving the request for the machine learning model to solve a task, and para 60 shows extracting a feature set for the model to use to solve/implement the task);

retrieving, from a model store, a first set of machine learning models that solve the task using at least a subset of features (para 12, 17, 40 show architecture search for machine learning models to perform a task, and para 52, 58, 61, 65 show finding and analyzing a plurality of machine learning models to solve the task using the subset of features; para 21, 38-40 show model store structures such as the cloud servers and other storage hardware from which the models may be selected);

calculating a complexity of each machine learning model in the first set of machine learning models (para 52, 59 show ways of calculating complexity for the machine learning models);

requesting user defined constraints regarding the execution environment (para 17, 64-65 show obtaining user defined constraints in the architecture executing the models);

determining, from the first set of machine learning models, a second set of machine learning models with at least one suitable machine learning model to be deployed, wherein the determining is based on the calculated complexity and the user defined constraints received from the execution environment (para 11, 43, 58 show selecting the model based on inference time, para 52, 59 show the inference time is based on complexity, and para 65 shows selecting those particular models based on the inference time [hence complexity] and user defined constraints).

Appel does not explicitly show that the constraints are resource constraints directly from the execution environment per se, but para 30, 42 do mention resource consumption constraints as a factor in the architecture. Furthermore, Ormont para 19, 23, 31 show selecting a machine learning model based on constraints which may include resource constraints from the execution environment, such as bandwidth and processing ability. It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have based the constraint selection on resource constraints from the execution environment, as is done in Ormont, in the method of Appel, because it would provide an efficient and useful way to select a machine learning model based on constraints that would accommodate the model's complexity within the particular execution environment.

7. Regarding claim 22, in addition to that mentioned for claim 21, Appel does select a machine learning model to be deployed as explained above, but Appel does not explicitly show assigning a rank to each machine learning model in the second set of machine learning models based on their historical predictive performance, and selecting a machine learning model with a highest rank. However, Ormont does show assigning a rank to each machine learning model in the second set of machine learning models based on their historical predictive performance, and selecting a machine learning model with a highest rank (para 16, 19, 31-32 show predicting performance metrics based on a prior data set and ordering the models based on predicted efficacy to select a best one).
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have this in the method of Appel, because it would provide an efficient and useful way to select a machine learning model in view of the constraints and the model's complexity within the particular execution environment.

8. Regarding claim 23, in addition to that mentioned for claim 21, Appel does not explicitly show that the determining comprises performing a resource shortage function on each machine learning model from the first set of machine learning models to form the second set of machine learning models, such that the resource shortage function is trained based on the calculated complexity and resource constraints as inputs to determine suitability of each machine learning model (for deployment), but Appel does show how complexity is used in determining the second set of models as being suitable for deployment, as explained above, and Appel para 39, 42 show the management layer providing functions to make determinations. Ormont, however, does show that the determining comprises performing a resource shortage function on each machine learning model from the first set of machine learning models to form the second set of machine learning models (para 3, 5, 13, 19, 38 show the functionality to use performance prediction to judge and select which machine learning models of the first set are optimal – these then form a second set), such that the resource shortage function is trained based on the calculated parameters and resource constraints as inputs to determine suitability of each machine learning model for deployment (para 13, 19, 34, 35 show the functionality performs this with the trained judge based on constraints, modeling intent, and other hyper-parameters).

It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to have this in the method of Appel, because it would provide an efficient and useful way to select the second set of machine learning models in a method that selects machine learning models in view of constraints and the model's complexity. Using the trained judge of Ormont based on constraints, and compatible to be based on other parameters, would be a useful tool to help Appel, which already determines suitability based on constraints and the parameter of model complexity, to select the models most suitable for deployment.

9. Regarding claim 24, in addition to that mentioned for claim 23, Ormont shows the resource shortage function is one of a machine learning function or a rule-based policy (note the alternative recitation – nevertheless, Ormont shows the machine learning function [para 5, 13, 19, 35 show the trained judge machine learning functionality predicting the pipeline performance] and the rule-based policy [para 31, 34, 38 show the constraint rule to control which pipelines are selected]). Motivation to combine the resource shortage function as described in the method of Appel is the same as that mentioned for claim 23.

10. Regarding claim 25, in addition to that mentioned for claim 23, the resource shortage function is a neural network configured to determine a suitability of each machine learning model for deployment from the first set of machine learning models (Ormont para 19 shows the trained judge is itself a machine learning model, and Appel para 17, 52, 58 show neural networks as convenient and efficient machine learning models; Appel para 47 also shows predicting inference time, and this is performed by machine learning, which per para 17, 52, 58 may be a neural network. Given the combination of Appel with Ormont – motivation being the same as that given for claim 23 – the machine learning model in Ormont would be a neural network. Having it would provide an efficient way to apply a machine learning model).

11. Regarding claim 26, in addition to that mentioned for claim 21, the step of determining comprises executing the rule-based policy on each machine learning model from the first set of machine learning models, where the rule-based policy defines a preferred machine learning model for varying measures of the complexity value and the resource constraints (Appel para 11, 43, 58 show selecting the model based on inference time, para 52, 59 show the inference time is based on complexity, and para 65 shows selecting those optimal or preferred model architectures based on rules regarding inference time [hence complexity] and user defined constraints. As explained for claim 21, the constraints in view of Ormont may be resource constraints).

12. Regarding claim 27, in addition to that mentioned for claim 21, the resource constraints comprise at least one of hardware constraints, software constraints, sampling requirements, active user equipment's and resource usage of the execution environment (note the alternative recitation – Ormont para 19, 23, 31 show the resource constraints include hardware and software constraints and give examples such as speed, bandwidth, and more). Motivation to combine Appel with Ormont is the same as that mentioned for claim 21.

13. Regarding claim 28, in addition to that mentioned for claim 21, the complexity of each machine learning model is computed based on parameters comprising at least one of model type, model size, training method, number of input features, and feature-sampling cost (note the alternative recitation – Appel para 59 shows the complexity may be based on model size, such as number of layers, and para 61 shows complexity based on different model types).

14.
Regarding claim 29, in addition to that mentioned for claim 21, the execution environment comprises a radio base station, an IoT device, and an edge computer (Appel para 77 shows the edge server, para 51, 71, 77 show the wireless (radio) station communication unit, and at least para 77-78 show computer devices connected via the Internet).

15. Claims 30 and 33-38 show the same features as claims 21-27, respectively, and are rejected for the same reasons. In addition, Appel para 69 shows the memory with a program executable by a processor to perform the method steps.

16. Regarding claim 31, the model store is a component of the apparatus (Appel para 21, 38-40 show model store structures such as the cloud servers and other storage hardware from which the models may be selected).

17. Regarding claim 32, the model store may be a separate entity configured to communicate with the apparatus (Appel para 23, 25, 29-30 show various model store structures that may be separate from the apparatus controlling the particular environment in which they may be selected for deployment).

18. Claim 39 shows the same features as claim 21 and is rejected for the same reasons. In addition, Appel para 69 shows the memory (non-transitory computer readable medium) with program instructions executable by a processor to perform the method steps.

19. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: a) Agrawal (US 11544494 B2) shows techniques for selection of machine learning models based on performance predictions by a trained algorithm-specific neural network regressor. b) Kamkar (CA 3134043 A1) shows selection of an adversarially trained model that better satisfies accuracy and fairness metrics.

20. Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEVEN PAUL SAX, whose telephone number is (571) 272-4072. The examiner can normally be reached Monday - Friday, 9:30 - 6:00 ET.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Usmaan Saeed, can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEVEN P SAX/
Primary Examiner, Art Unit 2146
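For orientation, the selection flow recited in claims 21-22 (retrieve candidate models, calculate a complexity for each, filter against the execution environment's resource constraints, then rank by historical predictive performance) can be sketched in code. This is a minimal illustrative sketch only: the class names, the weighted complexity heuristic, and the example model entries are all assumptions, not taken from the application's actual disclosure or the cited references.

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    """Hypothetical model-store record; fields are illustrative only."""
    name: str
    features: set            # features the model consumes
    size_mb: float           # proxy inputs to the complexity metric
    num_layers: int
    historical_score: float  # past predictive performance, used for ranking

def complexity(m: ModelEntry) -> float:
    # Claim 28 lists model size/type among possible parameters; this
    # weighted sum is an assumed stand-in for the claimed calculation.
    return 0.5 * m.size_mb + 2.0 * m.num_layers

def select_model(store, task_features, resource_budget):
    # Claim 21: retrieve models that solve the task using at least a
    # subset of the requested features (the "first set")...
    first_set = [m for m in store if m.features <= set(task_features)]
    # ...then keep those whose calculated complexity fits the execution
    # environment's resource constraints (the "second set").
    second_set = [m for m in first_set if complexity(m) <= resource_budget]
    if not second_set:
        return None
    # Claim 22: rank by historical predictive performance, pick highest.
    return max(second_set, key=lambda m: m.historical_score)

# Illustrative usage with invented entries:
store = [
    ModelEntry("small_net", {"rsrp"}, size_mb=4, num_layers=3,
               historical_score=0.81),
    ModelEntry("big_net", {"rsrp", "sinr"}, size_mb=120, num_layers=50,
               historical_score=0.93),
]
best = select_model(store, ["rsrp", "sinr"], resource_budget=20.0)
```

Here the higher-scoring model is excluded by the resource budget, so the smaller model is selected, which mirrors the constraint-before-rank ordering the claims describe.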

Prosecution Timeline

Aug 01, 2023
Application Filed
Mar 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602537
METHODS FOR SERVING INTERACTIVE CONTENT TO A USER
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12596343
GRAPHICAL ELEMENT SEARCH TECHNIQUE SELECTION, FUZZY LOGIC SELECTION OF ANCHORS AND TARGETS, AND/OR HIERARCHICAL GRAPHICAL ELEMENT IDENTIFICATION FOR ROBOTIC PROCESS AUTOMATION
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12547922
BENCHMARK-DRIVEN AUTOMATION FOR TUNING QUANTUM COMPUTERS
Granted Feb 10, 2026 • 2y 5m to grant
Patent 12541708
TRUSTED AND DECENTRALIZED AGGREGATION FOR FEDERATED LEARNING
Granted Feb 03, 2026 • 2y 5m to grant
Patent 12524691
CENTRAL CONTROLLER FOR A QUANTUM SYSTEM
Granted Jan 13, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70% (99% with interview, +44.8%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 460 resolved cases by this examiner. Grant probability derived from career allow rate.
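Since the footnote states the grant probability is derived from the career allow rate, the headline 70% appears to be 320 grants over 460 resolved cases, rounded to the nearest point. A quick check of that arithmetic (assuming simple rounding is the derivation used):

```python
# Career allow rate from the examiner stats above: 320 granted / 460 resolved.
granted, resolved = 320, 460
allow_rate = granted / resolved          # 0.6956...
headline = round(allow_rate * 100)       # 69.57% rounds to 70
print(f"{allow_rate:.1%} -> headline {headline}%")
```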
