Prosecution Insights
Last updated: April 19, 2026
Application No. 18/305,712

METHOD AND ELECTRONIC DEVICE FOR AUTOMATED MACHINE LEARNING MODEL RETRAINING

Non-Final OA: §101, §102, §103
Filed: Apr 24, 2023
Examiner: LAHAM BAUZO, ALVARO SALIM
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 33% (At Risk)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 33% (1 granted / 3 resolved; -21.7% vs TC avg) — grants only 33% of cases
Interview Lift: +100.0% (strong; resolved cases with interview vs without)
Avg Prosecution: 3y 4m (typical timeline); 27 applications currently pending
Career History: 30 total applications across all art units

Statute-Specific Performance

§101: 32.4% (-7.6% vs TC avg)
§103: 44.3% (+4.3% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 3 resolved cases.
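The per-statute deltas above are simple differences between the examiner's allowance rate and the Tech Center average estimate. As an illustrative sanity check (figures copied from the table above; this is not part of the analytics tool), the implied Tech Center baseline can be recovered from each row:

```python
# Illustrative check of the statute-level deltas shown above.
# Figures are copied from the dashboard table; the Tech Center
# average implied by each row is the examiner's rate minus the delta.

examiner = {"101": 32.4, "103": 44.3, "102": 7.3, "112": 16.0}
delta_vs_tc = {"101": -7.6, "103": 4.3, "102": -32.7, "112": -24.0}

for statute, rate in examiner.items():
    tc_avg = rate - delta_vs_tc[statute]  # implied TC average estimate
    print(f"§{statute}: {rate:.1f}% vs TC avg {tc_avg:.1f}% "
          f"({delta_vs_tc[statute]:+.1f}%)")
# Each implied TC average comes out to 40.0%, i.e. the four deltas
# are all measured against a single Tech Center baseline.
```

Every row implies the same 40.0% baseline, which is consistent with the chart's single Tech Center average line.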

Office Action

Grounds: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. IN202241044197, filed on August 2, 2022.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on April 23, 2023, April 16, 2024, November 25, 2024, and October 22, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claims 6 and 16 are objected to because the claims recite "comparing […] based on the threshold configurations". While claims 1 and 11 recite a pre-defined threshold, the threshold used in claim 6 for comparing network slice configurations appears to be based on slice similarity (see paragraph [0070]: "At S502, the TLM (113) detects slice similarity by comparing slice configuration parameters based on threshold configurations"). As written, it is unclear whether claims 6 and 16 refer to the pre-defined threshold in respective parent claims 1 and 11, or to a distinct threshold. For purposes of examination, the threshold of claims 6 and 16 will be construed as a threshold distinct from the threshold recited in claims 1 and 11.

Claim 13 is objected to because the claim recites "regrading". This appears to be a typographical error. The examiner will construe "regrading" as "regarding".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-10 are directed to a process. Claims 11-20 are directed to a machine or an article of manufacture.

With respect to claims 1 and 11:

2A Prong 1: The claims recite an abstract idea. Specifically:

- "identifying/identify information on an accuracy degradation of the first ML model for a network system […]" (Mental process – a person can manually determine accuracy degradation in the mind or with the physical aid of pen and paper (see paragraph [0006]) – see MPEP § 2106.04(a)(2)(III))
- "identifying/identify […] that a predicted accuracy degradation corresponds to a pre-defined threshold based on the information on the accuracy degradation of the first ML model; and" (Mathematical concepts and/or mental process – identifying that a predicted accuracy degradation corresponds to a pre-defined threshold based on the information on the accuracy degradation involves mathematical comparison logic. Additionally, a person can identify that a predicted accuracy degradation (e.g., a predicted accuracy degradation value) corresponds to a pre-defined threshold (i.e., compare whether the predicted value is equal to, below, or above a specific value) in the mind or with the physical aid of pen and paper – see MPEP § 2106.04(a)(2))

If claim limitations, under their broadest reasonable interpretation, cover performance of the limitations as a mental process but for the recitation of generic computer components, then the claim limitations fall within the mathematical concepts or mental process groupings of abstract ideas. Accordingly, the claims "recite" an abstract idea.

2A Prong 2: The additional elements recited in the claims do not integrate the abstract idea into a practical application, individually or in combination.
Additional elements:

- (Claim 1) "A method for automated Machine Learning (ML) model training by an electronic device comprising at least one processor, the method comprising:" (Mere instructions to apply an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f))
- (Claim 11) "An electronic device for automated Machine Learning (ML) model training, wherein the electronic device comprises:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 11) "a memory;" (Mere recitation of a generic computer component – see MPEP § 2106.05(b)(I))
- (Claim 11) "at least one processor; and" (Mere recitation of a generic computer component – see MPEP § 2106.05(b)(I))
- (Claim 11) "a proactive training engine, coupled to the memory and the at least one processor, the proactive training engine comprising circuitry, the proactive training engine configured to:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "running/run a first ML model and a second ML model;" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "[…] using the second ML model;" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 1) "[…] by the electronic device […]" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "training/train the first ML model based on the identifying that the predicted accuracy degradation corresponds to the pre-defined threshold." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

Since the claims as a whole, looking at the additional elements individually and in combination, do not contain any other additional elements indicative of integration into a practical application, the claims are directed to an abstract idea.

2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are the same as those identified under 2A Prong 2 above, with the same characterizations (see MPEP § 2106.05(b)(I) and § 2106.05(f)). Considering the additional elements individually and in combination, and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claims are not patent eligible.

With respect to claims 2 and 12:

2A Prong 1: The claims recite an abstract idea. Specifically:

- "wherein the accuracy degradation is due to unplanned events occurring in the first ML model." (Mental process – a person can manually determine that accuracy degradation is due to unplanned events in the mind or with the physical aid of pen and paper (see paragraph [0006]) – see MPEP § 2106.04(a)(2)(III))

Additionally, the claims do not recite any new additional elements that would amount to an integration of the abstract idea into a practical application (individually or in combination) or significantly more than the judicial exception. Therefore, the claims are not patent eligible.

With respect to claims 3 and 13:

2A Prong 1: The claims recite an abstract idea.
Specifically:

- (Claim 3) "identifying/identify the information on the accuracy degradation of the first ML model using the second ML model, comprises/at least by:" (Mental process – a person can manually determine accuracy degradation in the mind or with the physical aid of pen and paper (see paragraph [0006]) – see MPEP § 2106.04(a)(2)(III))
- "identifying […] the information on the accuracy degradation of the first ML model based on analyzing the data regarding the accuracy of the first ML model with the second ML model." (Mental process – same reasoning – see MPEP § 2106.04(a)(2)(III))

2A Prong 2: The additional elements recited in the claims do not integrate the abstract idea into a practical application, individually or in combination.

Additional elements:

- (Claim 13) "wherein the engine is configured to […]" (Mere instructions to apply an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f))
- "receiving, by the electronic device, data regarding accuracy of the first ML model comprising at least one of: a model type, parameters and hyper parameters, network nodes, cell models, slice/cell configuration information, existing models that can be used for transfer learning, model training time, model prediction accuracies, resources used for model training, extraction times, time window of data extraction, data generation patterns, model accuracy data, and execution time for each training pipeline;" (Mere data gathering – insignificant extra-solution activity added to the judicial exception – see MPEP § 2106.05(g))
- "storing […] the data regarding the accuracy of the first ML model to a database;" (Insignificant extra-solution activity added to the judicial exception – see MPEP § 2106.05(g))

2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements:

- (Claim 13) "wherein the engine is configured to […]" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "receiving, by the electronic device, data regarding accuracy of the first ML model comprising at least one of: a model type, parameters and hyper parameters, network nodes, cell models, slice/cell configuration information, existing models that can be used for transfer learning, model training time, model prediction accuracies, resources used for model training, extraction times, time window of data extraction, data generation patterns, model accuracy data, and execution time for each training pipeline;" (Simply appending well-understood, routine, conventional (WURC) activity previously known to the industry, specified at a high level of generality, to the judicial exception – see MPEP § 2106.05(d)(II)(i) – receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information))
- "storing […] the data regarding the accuracy of the first ML model to a database;" (WURC activity appended to the judicial exception – see MPEP § 2106.05(d)(II)(i), Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362)
- (Claim 3) "[…] by the electronic device […]" (Mere instructions to apply an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f))

Since the claims do not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claims are not patent eligible.

With respect to claims 4 and 14:

2A Prong 1: The claims recite an abstract idea. Specifically:

- "identifying […] an expected time for completion of the model training and data extraction based on data of accuracy stored in a database;" (Mental process – a person can identify an expected time for completion of the model training in the mind or with the physical aid of pen and paper – see MPEP § 2106.04(a)(2)(III))
- "identifying […] incoming requests of the first ML model;" (Mental process – see MPEP § 2106.04(a)(2)(III))
- "identifying […] resources and resource constraints;" (Mental process – see MPEP § 2106.04(a)(2)(III))
- "creating […] a plan for training the first ML model based on the identified incoming requests, the expected time for completion of the training, the identified resources, and the resource constraints;" (Mental process – a person can create such a plan in the mind or with the physical aid of pen and paper – see MPEP § 2106.04(a)(2)(III))

2A Prong 2: The additional elements recited in the claims do not integrate the abstract idea into a practical application, individually or in combination.

Additional elements:

- (Claim 14) "wherein the engine is configured to train the first ML model at least by:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "obtaining […] the data of accuracy of the first ML model from the database;" (Insignificant extra-solution activity of data gathering added to the judicial exception – see MPEP § 2106.05(g))
- "scaling […] up/down and/or in/out the resources based on the created plan; and" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "triggering […] training of the first ML model based on the created plan." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 4) "[…] by the electronic device […]" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements:

- (Claim 14) "wherein the engine is configured to train the first ML model at least by:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "obtaining […] the data of accuracy of the first ML model from the database;" (Simply appending well-understood, routine, conventional (WURC) activity, specified at a high level of generality, to the judicial exception – see MPEP § 2106.05(d)(II)(i) – receiving or transmitting data over a network, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362)
- "scaling […] up/down and/or in/out the resources based on the created plan; and" (Mere instructions to apply an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f))
- "triggering […] training of the first ML model based on the created plan." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 4) "[…] by the electronic device […]" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

Since the claims do not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claims are not patent eligible.

With respect to claims 5 and 15:

2A Prong 1: The claims recite an abstract idea. Specifically:

- "identifying […] a network slice with the first ML model similar to the new network slice and capable for transfer learning;" (Mental process – see MPEP § 2106.04(a)(2)(III))
- "identifying […] super models of the first ML model used for transfer learning, and remaining layers of the first ML model to be trained based on inputs from a model registry;" (Mental process – see MPEP § 2106.04(a)(2)(III))

2A Prong 2: The additional elements recited in the claims do not integrate the abstract idea into a practical application, individually or in combination.

Additional elements:

- (Claim 5) "wherein the training the first ML model comprises:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 15) "wherein the engine is configured to train the first ML model at least by:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "receiving […] a request to configure a ML service with a new network slice;" (Insignificant extra-solution activity added to the judicial exception – see MPEP § 2106.05(g))
- "triggering […] training of the remaining layers of the first ML model." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 5) "[…] by the electronic device […]" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements:

- (Claim 5) "wherein the training the first ML model comprises:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 15) "wherein the engine is configured to train the first ML model at least by:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "receiving […] a request to configure a ML service with a new network slice;" (Simply appending well-understood, routine, conventional (WURC) activity, specified at a high level of generality, to the judicial exception – see MPEP § 2106.05(d)(II)(i) – receiving or transmitting data over a network, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362)
- "triggering […] training of the remaining layers of the first ML model." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- (Claim 5) "[…] by the electronic device […]" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

Since the claims do not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claims are not patent eligible.

With respect to claims 6 and 16:

2A Prong 1: The claims recite an abstract idea. Specifically:

- "comparing/compare configuration parameters of a new network slice and configuration parameters of a plurality of network slices based on the threshold configurations;" (Mental process – a person can perform this comparison in the mind or with the physical aid of pen and paper – see MPEP § 2106.04(a)(2)(III))
- "identifying/identify, based on the comparing configuration parameters of the new network slice and configuration parameters of the plurality of network slices, a network slice with the first ML model similar to the new network slice and capable for transfer learning." (Mental process – see MPEP § 2106.04(a)(2)(III))

2A Prong 2: The additional elements recited in the claims do not integrate the abstract idea into a practical application, individually or in combination.

Additional elements:

- (Claim 16) "wherein the engine is further configured to:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements:

- (Claim 16) "wherein the engine is further configured to:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

Since the claims do not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claims are not patent eligible.

With respect to claims 7 and 17:

2A Prong 1: The claims recite an abstract idea.
Specifically:

- "identifying first time interval to training the first ML model and second time interval to obtain data for training the first ML model;" (Mental process – a person can mentally identify these time intervals – see MPEP § 2106.04(a)(2)(III))
- "identifying, based on the first time interval and the second time interval, a third time interval for training the first ML model; and" (Mental process – see MPEP § 2106.04(a)(2)(III))

2A Prong 2: The additional elements recited in the claims do not integrate the abstract idea into a practical application, individually or in combination.

Additional elements:

- (Claim 17) "wherein the engine is further configured to:" (Mere instructions to apply an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea – see MPEP § 2106.05(f))
- "obtaining/obtain, from a database, the information on the accuracy degradation of the first ML model;" (Insignificant extra-solution activity of data gathering added to the judicial exception – see MPEP § 2106.05(g))
- "during the third time interval, training/train the first ML model." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements:

- (Claim 17) "wherein the engine is further configured to:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "obtaining/obtain, from a database, the information on the accuracy degradation of the first ML model;" (Simply appending well-understood, routine, conventional (WURC) activity, specified at a high level of generality, to the judicial exception – see MPEP § 2106.05(d)(II)(i) – receiving or transmitting data over a network, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362)
- "during the third time interval, training/train the first ML model." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

Since the claims do not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claims are not patent eligible.

With respect to claims 8 and 18:

2A Prong 2: The additional elements recited in the claims do not integrate the abstract idea into a practical application, individually or in combination.

Additional elements:

- (Claim 18) "wherein the engine is further configured to:" (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))
- "scaling/scale up a resource for training the first ML model; and training, based on the resource scaled up, the first ML model." (Mere instructions to apply an abstract idea on a computer – see MPEP § 2106.05(f))

2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements: (Claim 18) wherein the engine is further configured to: (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) scaling/scale up a resource for training the first ML model; and training, based on the resource scaled up, the first ML model. (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible. Therefore, the claim is not patent eligible. With respect to claim(s) 9 and 19: 2A Prong 1: The claim(s) recite(s) an abstract idea. Specifically: identifying/identify that traffic pattern is changed; and (Mental process – A person can mentally identify that traffic pattern is changed – see MPEP § 2106.04(a)(2)(III)) identifying/identify that information on key performance data (KPI) is to be changed based on the identifying that traffic pattern is changed. (Mental process – A person can mentally identify that information on key performance data (KPI) is to be changed based on the identifying that traffic pattern is changed – see MPEP § 2106.04(a)(2)(III)) 2A Prong 2: The additional elements recited in the claim(s) do not integrate the abstract idea into a practical application, individually or in combination. Additional elements: (Claim 19) wherein the engine is further configured to: (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) 2B: The claim(s) do(es) not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
Additional elements: (Claim 19) wherein the engine is further configured to: (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) Since the claim does not recite additional elements that either integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception, the claim is not patent eligible. Therefore, the claim is not patent eligible. With respect to claim(s) 10 and 20: 2A Prong 1: The claim(s) recite(s) an abstract idea. Specifically: identifying/identify at least one configuration parameter of a network slice related to the first ML model; and (Mental process – A person can mentally at least one configuration parameter of a network slice related to the first ML model – see MPEP § 2106.04(a)(2)(III)) 2A Prong 2: The additional elements recited in the claim(s) do not integrate the abstract idea into a practical application, individually or in combination. Additional elements: (Claim 20) wherein the engine is further configured to: (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) training/train, based on the at least one configuration parameter is changed, the first ML model. (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) 2B: The claim(s) do(es) not include additional elements that are sufficient to amount to significantly more than the judicial exception. Additional elements: (Claim 20) wherein the engine is further configured to: (Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).) training/train, based on the at least one configuration parameter is changed, the first ML model. 
(Mere instructions to apply an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea – see MPEP 2106.05(f).)

Since the claim does not recite additional elements that either integrate the judicial exception into a practical application or provide significantly more than the judicial exception, the claim is not patent eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3 and 11-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by KINGETSU (US 20220207307 A1), hereafter KINGETSU.

Regarding Claim 1: KINGETSU teaches: A method for automated Machine Learning (ML) model training by an electronic device comprising at least one processor, the method comprising: (KINGETSU [0099] teaches: "The control unit 150 includes a training unit 151, the creating unit 152, a detection unit 153, and a prediction unit 154. The control unit 150 is able to be implemented by a central processing unit (CPU) (i.e., an electronic device comprising a processor), a micro processing unit (MPU), or the like.
Furthermore, the control unit 150 is also able to be implemented by hard wired logic, such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA).") running a first ML model and a second ML model; (KINGETSU [0069] teaches: "[...] it is possible to use the Teacher model 7A as a machine learning model (i.e., a first ML model) corresponding to the monitoring target and it is possible to use the Student model 7B as an inspector model (i.e., a second ML model)." Examiner's note: running a [...] model can be interpreted as using the teacher and student model for performing its intended function, such as the teacher model outputting classification results and the inspector model detecting accuracy degradation based on the classification output results.) identifying information on an accuracy degradation of the first ML model for a network system using the second ML model; (KINGETSU [0109] [...] teaches: "The detection unit 153 is a processing unit that detects accuracy degradation of the machine learning model 50 (i.e., accuracy degradation of the first ML model) by operating the inspector model 35 (i.e., using the second ML model)." KINGETSU [0119] teaches: "The detection unit 153 may output and display, onto the display unit 130, data identification information on the operation data set serving as a basis of detecting the accuracy degradation (i.e., identifying information on an accuracy degradation). Furthermore, the detection unit 153 may notify the training unit 151 of information indicating that accuracy degradation has been detected and retrain the machine learning model data 142 (i.e., of the first ML model).” Examiner's note: A network system can be interpreted as the information systems that are used by business enterprises, on which the machine learning model operates and performs its functions.) 
identifying, by the electronic device, that a predicted accuracy degradation corresponds to a pre-defined threshold based on the information on the accuracy degradation of the first ML model; (KINGETSU [0099] teaches: "The control unit 150 includes a training unit 151, the creating unit 152, a detection unit 153, and a prediction unit 154. The control unit 150 is able to be implemented by a central processing unit (CPU), a micro processing unit (MPU), or the like. Furthermore, the control unit 150 is also able to be implemented by hard wired logic, such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA)." KINGETSU [0113] teaches: "The detection unit 153 compares the first proportion to the second proportion, determines that concept drift occurs in the case where the second proportion is changed with respect to the first proportion, and then, detects accuracy degradation of the machine learning model 50. For example, the detection unit 153 determines that concept drift occurs in the case where the absolute value of the difference between the first proportion and the second proportion is larger than or equal to a threshold (i.e., a predicted accuracy degradation corresponds to a pre-defined threshold).") training the first ML model based on the identifying that the predicted accuracy degradation corresponds to the pre-defined threshold. (KINGETSU [0209] teaches: "If the difference between the first distance and the second distance is larger than or equal to the previously set threshold (i.e., corresponds to the pre-defined threshold), the computing system detects accuracy degradation of the machine learning model by recognizing that concept drift has occurred." 
KINGETSU [0119] teaches: "Furthermore, the detection unit 153 may notify the training unit 151 of information indicating that accuracy degradation has been detected and retrain the machine learning model data 142 (i.e., training the first ML model based on the identifying that the predicted accuracy degradation corresponds to the pre-defined threshold). In this case, the training unit 151 retrains the machine learning model 50 (i.e., first ML model) by using a training data set that is newly designated.")

Regarding Claim 2: KINGETSU teaches the elements of claim 1 as outlined above. KINGETSU further teaches: The method as claimed in claim 1, wherein the accuracy degradation is due to unplanned events occurring in the first ML model. (KINGETSU [0009] teaches: "In the distribution 1B, the tendency of the pieces of input data has been changed, so that, although all of the pieces of input data are distributed among normal model application areas, the distribution of the pieces of input data indicated by the star marks are changed in the direction of the model application area 3b." KINGETSU [0010] teaches: "In the distribution 1C, the tendency of the pieces of input data is further changed, some pieces of the input data indicated by the star marks move across the decision boundary 3 into the model application area 3b, and are not properly classified; therefore, a correct answer rate is decreased (i.e., the accuracy of the machine learning model is degraded)." KINGETSU [0013] teaches: "a change in the output result of the machine learning model caused by a temporal change in a tendency of the operation data." KINGETSU [0209] teaches: "If the difference between the first distance and the second distance is larger than or equal to the previously set threshold the computing system detects accuracy degradation of the machine learning model by recognizing that concept drift has occurred."
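For readers tracing the mapping, the threshold test KINGETSU describes in the passages quoted above ([0113], [0209]) reduces to comparing a class proportion between a baseline window and an operation window and retraining when the shift meets a threshold. The following is a minimal sketch of that trigger logic only; the function and variable names are illustrative assumptions, not taken from the reference:

```python
def detect_concept_drift(baseline_outputs, operation_outputs, target_class, threshold):
    """Flag accuracy degradation when the proportion of outputs assigned to
    `target_class` shifts by at least `threshold` between the two windows
    (the |first proportion - second proportion| >= threshold test of [0113])."""
    first_proportion = sum(1 for y in baseline_outputs if y == target_class) / len(baseline_outputs)
    second_proportion = sum(1 for y in operation_outputs if y == target_class) / len(operation_outputs)
    return abs(first_proportion - second_proportion) >= threshold


def maybe_retrain(model, drift_detected, retrain_fn, new_training_data):
    """Mirror the notification path of [0119]: retrain only when drift fires.
    `retrain_fn` is a placeholder for whatever training routine is used."""
    if drift_detected:
        return retrain_fn(model, new_training_data)
    return model
```

Here `retrain_fn` stands in for the system's actual training routine; the sketch illustrates only how a thresholded drift signal gates retraining.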
Examiner's note: Under broadest reasonable interpretation, unplanned events occurring in the first ML model can be interpreted as the tendency change in the operation data, which causes the machine learning model (i.e., first ML model) to improperly classify the data, resulting in concept drift that causes accuracy degradation over time (see Fig. 32).)

Regarding Claim 3: KINGETSU teaches the elements of claim 1 as outlined above. KINGETSU further teaches: The method as claimed in claim 1, wherein the identifying the information on the accuracy degradation of the first ML model using the second ML model, comprises: receiving, by the electronic device, data regarding accuracy of the first ML model comprising at least one of: a model type, parameters and hyper parameters, network nodes, cell models, slice/cell configuration information, existing models that can be used for transfer learning, model training time, model prediction accuracies, resources used for model training, extraction times, time window of data extraction, data generation patterns, model accuracy data, and execution time for each training pipeline; (KINGETSU [0099] teaches: "The control unit 150 includes a training unit 151, the creating unit 152, a detection unit 153, and a prediction unit 154. The control unit 150 is able to be implemented by a central processing unit (CPU), a micro processing unit (MPU), or the like. Furthermore, the control unit 150 is also able to be implemented by hard wired logic, such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA)." KINGETSU [0010] teaches: "In the distribution 1C, the tendency of the pieces of input data is further changed, some pieces of the input data indicated by the star marks move across the decision boundary 3 into the model application area 3b, and are not properly classified; therefore, a correct answer rate is decreased (i.e., the accuracy of the machine learning model is degraded)."
Examiner’s note: KINGETSU [0004] and [Fig. 32] teaches a machine learning model using decision boundaries to classify input data into three different application areas by classes denoted by stars, triangles, and circles. As time elapses from 1A to 1C, we observe that the input data denoted by triangles has changed, and thus the correct answer rate of the machine learning model decreases over time. Under broadest reasonable interpretation, the data regarding accuracy of the first ML model comprising at least one of: [...] model prediction accuracies, […] model accuracy data can be interpreted as the machine learning model's correct answer rate in each distribution 1A, 1B, and 1C.) storing, by the electronic device, the data regarding the accuracy of the first ML model to a database; (KINGETSU [0067] teaches: "[...] an output of the Teacher model 7A is referred to as a "soft target"." KINGETSU [0219] teaches: "The distillation data table 343 is a table that stores (i.e., storing [...] to a database) therein an output result (soft target) (i.e., the data regarding the accuracy) in the case where each of the pieces of data of a data set is input to the machine learning model 50 (i.e., of the first ML model).” KINGETSU [0088] teaches: “The storage unit 140 includes […] a distillation data table 143 […]. The storage unit 140 corresponds to a semiconductor memory device, such as a random access memory (RAM) or a flash memory, or a storage device, such as a hard disk drive (HDD).") identifying, by the electronic device, the information on the accuracy degradation of the first ML model based on analyzing the data regarding the accuracy of the first ML model with the second ML model. 
(KINGETSU [0010] teaches: "In the distribution 1C, the tendency of the pieces of input data is further changed, some pieces of the input data indicated by the star marks move across the decision boundary 3 into the model application area 3b, and are not properly classified; therefore, a correct answer rate is decreased (i.e., the accuracy of the machine learning model is degraded)." KINGETSU [0063] teaches: "In order to detect accuracy degradation of the machine learning model 10 with respect to operation data in accordance with elapsed time, a critical area 5a that includes the decision boundary 5 is monitored, and whether or not the number of pieces of operation data included in the critical area 5a is increased (or decreased), and, if the number of pieces of the operation data is increased (or decreased), accuracy degradation is detected." Examiner's note: the inspector model (i.e., second ML model) monitors the classification "soft target" outputs (i.e., analyzing the data regarding the accuracy) of the machine learning model 10, which is the teacher model (i.e., first ML model), to detect accuracy degradation of the model.)

Regarding Claim 11: The claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. KINGETSU further teaches: a memory; at least one processor; and a proactive training engine, coupled to the memory and the at least one processor, the proactive training engine comprising circuitry, the proactive training engine configured to: ("As illustrated in FIG. 10, a computing system 100 includes a communication unit 110, an input unit 120, a display unit 130, a storage unit 140, and a control unit 150.” KINGETSU [0099] teaches: "The control unit 150 includes a training unit 151, the creating unit 152, a detection unit 153, and a prediction unit 154.
The control unit 150 is able to be implemented by a central processing unit (CPU) (i.e., at least one processor), a micro processing unit (MPU), or the like. Furthermore, the control unit 150 is also able to be implemented by hard wired logic, such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA)." KINGETSU [0088] teaches: "The storage unit 140 includes teacher data 141, machine learning model data 142, a distillation data table 143, inspector model data 144, and an operation data table 145. The storage unit 140 corresponds to a semiconductor memory device (i.e., memory), such as a random access memory (RAM) or a flash memory, or a storage device, such as a hard disk drive (HDD)." Furthermore, KINGETSU [Claim 9] teaches: "A computing system comprising: one or more memories; and one or more processors coupled to the one or more memories (i.e., coupled to the memory and at least one processor), [...].”)

Regarding Claim 12: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.

Regarding Claim 13: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over KINGETSU in view of VOLODARSKIY (US 20200175354 A1), KUMAR (US 20200004863 A1), and ANANTHANARAYANAN (US 20220188569 A1), hereafter VOLODARSKIY, KUMAR, and ANANTHANARAYANAN respectively.

Regarding Claim 4: KINGETSU teaches the elements of claim 1 as outlined above. KINGETSU is not relied upon for teaching, but VOLODARSKIY teaches: identifying, by the electronic device, an expected time for completion of the model training […] based on data of accuracy stored in a database; (VOLODARSKIY [0070] teaches: "In step 424, estimation service 420 samples a batch of trials which have been completed, and estimates the time to train each model (i.e., identifying [...] an expected time for completion of the model training) represented by each trial. […] In either case, estimation service 420 may estimate an approximate training time for each model based on a train-time model that is designed to predict run times for training a model based on the trial statistics (i.e., based on data of accuracy) accumulated by trial-based optimization service 410 (e.g., including the execution times of the trials) for that model."
VOLODARSKIY [0069] teaches: "The data in this case may comprise the trial statistics stored (e.g., in database(s) 114) by trial-based optimization service 410.") obtaining, by the electronic device, the data of accuracy of the first ML model from the database; (VOLODARSKIY [0065] teaches: "More specifically, steps 412 and 414 may be implemented by a trial-based optimization service 410 of selection module 113." VOLODARSKIY [0067] teaches: "In step 412, trial-based optimization service 410 begins executing the batched trials [...]. As the trials are executed, one or more statistics about each trial are obtained and stored, for example, in database(s) 114 (i.e., from a database). These statistics may include, for example, the execution time of each trial, the state(s) of each trial, and/or the results of each trial." VOLODARSKIY [0069] teaches: "The data in this case may comprise the trial statistics stored (e.g., in database(s) 114) by trial-based optimization service 410. In an embodiment, sufficient data may be determined to exist when a predetermined number of trials (e.g., one, two, five, ten, etc.) and/or models (i.e., first ML model) have been successfully executed." VOLODARSKIY [0070] teaches: "In step 424, estimation service 420 samples (i.e., obtaining) a batch of trials (i.e., data of accuracy) which have been completed, and estimates the time to train each model represented by each trial.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU and VOLODARSKIY before them, to include VOLODARSKIY's estimation of time to train each model in KINGETSU's accuracy degradation detection method. 
One would have been motivated to make such a combination in order to select one or more models with the fastest estimated training time for further evaluation and to optimally select which machine-learning models to train to provide the best accuracy within a given timeframe (VOLODARSKIY [0071-0072] and [0003]).

KINGETSU in view of VOLODARSKIY is not relied upon for teaching, but KUMAR teaches: identifying […] an expected time for […] data extraction based on data of accuracy stored in a database; (KUMAR [0002] teaches: "The one or more processors may generate the set of forecasts of the ETL completion time (i.e., expected time for [...] data extraction) by using a data model to process the set of performance indicators and the set of recommendations that are capable of reducing the ETL completion time." KUMAR [0039] teaches: "The one or more performance indicators relating to data quality may include [...] a second performance indicator identifying degree of accuracy of the source data (e.g., by identifying a frequency at which particular errors occur, etc.) [...]." KUMAR [0020] teaches: "For example, example implementation 100 may include a first collection of data sources (shown as Data Source 1 through Data Source N), [...]." Examiner's note: ETL stands for Extract, Transform, and Load, which is typically done to prepare data for subsequent processes. KUMAR [0002] teaches forecasting ETL completion time by processing indicators such as the degree of accuracy of the source data stored in Data Source 1 through N (i.e., based on data of accuracy stored in a database).)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU, VOLODARSKIY, and KUMAR before them, to include KUMAR's ETL job forecasting based on degree of accuracy in KINGETSU and VOLODARSKIY's accuracy degradation detection method.
One would have been motivated to make such a combination in order to identify when preventative actions need to be taken to ensure that the group of ETL jobs execute within a requested ETL completion time (KUMAR [0019]).

KINGETSU in view of VOLODARSKIY and KUMAR is not relied upon for teaching, but ANANTHANARAYANAN teaches: identifying, by the electronic device, incoming requests of the first ML model; (ANANTHANARAYANAN [0063] teaches: "System 500 (i.e., electronic device) comprises an edge server 502 on which a thief scheduler 504 and a micro-profiler 506 are executed by one or more processors and/or other logic components (e.g. GPUs, field programmable gate arrays (FPGSAs), etc.) via instructions stored on the edge server 502." ANANTHANARAYANAN [0122] teaches: "With reference to FIG. 12A, at 1202, the method 1200 comprises receiving a video stream." ANANTHANARAYANAN [0004] teaches: "Another example provides, on a computing device comprising a machine learning model configured to analyze video data, a method for allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of the machine learning model, the continuous retraining and operation comprising a plurality of jobs including, for each video stream of one or more video streams, an inference job and a retraining job." ANANTHANARAYANAN [0038] teaches: "In each retraining window, the resource scheduler makes the decisions (i.e., identify) described above to [...] (2) allocate the edge server's GPU resources among the retraining and inference jobs, and (3) select the configurations of the retraining and inference jobs." Examiner's note: Under broadest reasonable interpretation, the identifying [...] incoming requests of the first ML model can be interpreted as the received video streams that are used in a continuous operation of the machine learning model for inference and retraining.)
identifying, by the electronic device, resources and resource constraints; (ANANTHANARAYANAN [0038] teaches: "In each retraining window, the resource scheduler makes the decisions (i.e., identify) described above to [...] (2) allocate the edge server's GPU resources among the retraining and inference jobs, and (3) select the configurations of the retraining and inference jobs.” ANANTHANARAYANAN [0067] teaches: "In some examples, the optimization algorithm aims to maximize inference accuracy averaged across all videos in a retraining window within a GPU's resource limit (i.e., resource constraints) [...]." ANANTHANARAYANAN [0063] teaches the electronic device that implements the functions above.) creating, by the electronic device, a plan for training the first ML model based on the identified incoming requests, the expected time for completion of the training, the identified resources, and the resource constraints; (ANANTHANARAYANAN [0003] teaches: "Examples are disclosed that relate to allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of a machine learning model, the continuous retraining and operation comprising a plurality of jobs (i.e., identified incoming requests) including, for each video stream of one or more video streams, an inference job and a retraining job [...]." ANANTHANARAYANAN [0038] teaches: "In each retraining window, the resource scheduler makes the decisions (i.e., creating [...] a plan [...] 
based on) described above to (1) decide which of the edge models to retrain (i.e., for training the first ML model); (2) allocate the edge server's GPU resources (i.e., the identified resources) among the retraining and inference jobs, and (3) select the configurations of the retraining and inference jobs.” ANANTHANARAYANAN [0075] teaches: "A micro-profiler (an example of which is described below) provides the estimate of the accuracy and the time to retrain for a retraining configuration (i.e., expected time for completion of the training) when 100% of GPU is allocated, and EstimateAccuracy proportionately scales the GPU-time for the current allocation (in temp_alloc[ ]) and training data size. In doing so, it may avoid configurations whose retraining durations exceed ∥T∥ with the current allocation (e.g., expression (2))." ANANTHANARAYANAN [0042] teaches: "Due to cost and energy constraints (i.e., the resource constraints), compute efficiency can be a primary design goals of edge computing.") triggering, by the electronic device, training of the first ML model based on the created plan. (ANANTHANARAYANAN [0063] teaches: "The edge server 502 executes the retraining jobs 510 and the inference jobs 512 based upon scheduling determined by the thief scheduler 504, and observes inference accuracies from retraining." Examiner's note: Under broadest reasonable interpretation, the triggering, by the electronic device, training of the first ML model based on the created plan can be interpreted as the edge server executing the retraining jobs based on the scheduling determined by the scheduler.) 
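The planning step mapped above (incoming requests, expected time for completion of training, identified resources, and resource constraints) can be illustrated with a simplified greedy planner. This is a sketch under stated assumptions, not ANANTHANARAYANAN's actual thief-scheduler algorithm; every name and the gain-per-GPU selection heuristic are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class RetrainRequest:
    model_id: str
    expected_minutes: float  # expected time for completion of the training
    gpu_needed: float        # fraction of one GPU the job requires
    expected_gain: float     # estimated accuracy improvement from retraining

def create_training_plan(requests, gpu_budget, window_minutes):
    """Greedily admit the retraining jobs with the best gain per GPU that fit
    both the resource budget and the retraining window."""
    plan = []
    remaining_gpu = gpu_budget
    ranked = sorted(requests, key=lambda r: r.expected_gain / r.gpu_needed, reverse=True)
    for req in ranked:
        if req.gpu_needed <= remaining_gpu and req.expected_minutes <= window_minutes:
            plan.append(req.model_id)        # job admitted to the plan
            remaining_gpu -= req.gpu_needed  # consume the budgeted resource
    return plan
```

Triggering training "based on the created plan" then amounts to launching the jobs in the returned list; the sketch shows only how requests, time estimates, and resource constraints jointly shape the plan.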
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU, VOLODARSKIY, KUMAR, and ANANTHANARAYANAN before them, to include ANANTHANARAYANAN's scheduler for making decisions about resource allocation and retraining configurations in KINGETSU, VOLODARSKIY, and KUMAR's accuracy degradation detection method. One would have been motivated to make such a combination in order to maximize overall inference accuracy for all input data over a given retraining window (ANANTHANARAYANAN [0065]).

Regarding Claim 14: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.

Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over KINGETSU in view of SEETHARAMAN (US 20210075678 A1), WANG (US 20220353803 A1), MAI ("Transfer Reinforcement Learning Aided Distributed Network Slicing Optimization in Industrial IoT"), and WATSON (US 20200320379 A1), hereafter SEETHARAMAN, WANG, MAI and WATSON respectively.

Regarding Claim 5: KINGETSU teaches the elements of claim 1 as outlined above. KINGETSU is not relied upon for teaching, but SEETHARAMAN teaches: receiving, by the electronic device, a request to configure a […] service with a new network slice; (SEETHARAMAN [0030] teaches: "The template controller interface module 202 may receive (i.e., receiving) a Network Slice Template Request message (NS-TMPLT-RQST) (i.e., a request to configure a [...] service) from the end-to-end orchestrator 138. Thereafter, the template controller interface module 202 may extract relevant information from the NS-TMPLT-RQST.
In an embodiment, the NS-TMPLT-RQST may include, but is not limited to service type(s) or categories to be supported (SERV-TYPE), one or more Service Level Agreements (SLAs) or one or more Key Performance Indicators (KPI) or performance requirements (TARGET-SLA-KPI), Capacity (CAP), User density (USER-DEN), isolation and sharing levels (ISO-SHARING), mobility requirements (MOB-RQMT), and Security, Policy requirements/constraints (SEC-POL-RQMTS). The NS-TMPLT-RQST may also include additional characteristics, such as, cost of operation, priority and pre-emption, reliability (ADDTNL-CHAR)." SEETHARAMAN [0035] teaches: "Further, the network slice template provider module 206 determines appropriate one or more network slice templates and one or more network slice sub-net templates. The network slice template provider module 206 creates or forms one or more new network slice templates (i.e., with a new network slice) as well as one or more new network slice sub-net templates." SEETHARAMAN [0010] teaches: "The system includes at least one processor (i.e., by the electronic device) and a memory communicatively coupled to the processor. The memory stores processor instructions, which, on execution, causes the processor to extract a plurality of parameters from a template data within a template request message. The processor instructions further cause the processor to determine at least one network slice template from a plurality of templates, based on comparison of the plurality of parameters with parameters associated with the plurality of templates.") identifying, by the electronic device, a network slice […] similar to the new network slice […] (SEETHARAMAN [0071] teaches: "For each of the NSLTs that are found to be suitable (i.e., identifying [...] 
a network slice), the network slice template provider module 206 consolidates the list of matching parameters (i.e., similar to the new network slice), not matching parameters (that require a modification to the NSLT), and the corresponding implications, if any." SEETHARAMAN [0010] teaches the processor (i.e., by the electronic device) for implementing the functions of finding a suitable NSLT (i.e., Network Slice Template).) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU and SEETHARAMAN before them, to include SEETHARAMAN’s receiving a Network Slice Template Request Message (NS-TMPLT-RQST) for determining or creating appropriate network slice templates to fulfill the requirements of the received message (e.g., NS-TMPLT-RQST). One would have been motivated to make such a combination in order to have better matching parameters since the reliability of the NSLT (i.e., Network Slice Template) can be greater than what is requested in the NS-TMPLT-RQST (Network Slice Template Request) (SEETHARAMAN [0065]). KINGETSU in view of SEETHARAMAN is not relied upon for teaching, but WANG teaches: […] a request to configure a ML service with a […] network slice; (WANG [0003] teaches: "The method also includes transmitting, to a network-slice manager of a wireless network, a first machine-learning architecture request message to request permission to use the first machine-learning architecture (i.e., a request to configure a ML service). The method additionally includes receiving, from the network-slice manager, a first machine-learning architecture response message that grants permission to use the first machine-learning architecture based on a first network slice. The method further includes wirelessly communicating data for the first application using the first machine-learning architecture." 
WANG [0058] teaches: “[…] the network-slice manager 190 associates each machine-learning architecture 210, 220, and 230 with one or more network slices, as further described with respect to FIG. 4.”)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU, SEETHARAMAN, and WANG before them, to include WANG’s request for permission to use machine-learning architectures in KINGETSU and SEETHARAMAN’s accuracy degradation detection method. One would have been motivated to make such a combination in order to determine an end-to-end machine-learning architecture that meets the requested quality-of-service level (WANG [0125]).

KINGETSU in view of SEETHARAMAN and WANG is not relied upon for teaching, but MAI teaches: […] a network slice with the first ML model […] and capable for transfer learning; (MAI [pg. 4309, section 1. Introduction] teaches: "In this article, we design a network slicing architecture over SDN-based LoRaWAN, where the SDN controller can dynamically partition LoRa gateways’ resources (e.g., physical channel) into several virtual networks (i.e., a network slice) on the fly. [...] The LoRa gateways using the DDPG are able to improve the performance by exploring the environment and learning directly from their experiences. [...] each slice agent (i.e., a network slice with the first ML model) on each LoRa gateway has to learn from scratch (i.e., randomized policy), thereby resulting in a long learning time to reach the system’s optimal performance. To accelerate the training process, we introduce the transfer learning framework [7] (i.e., and capable of transfer learning).
Transfer learning is a machine learning technique where experience gained acquired from one task can be transferred to other related tasks.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU, SEETHARAMAN, WANG, and MAI before them, to include MAI’s transfer learning framework in KINGETSU, SEETHARAMAN, and WANG’s accuracy degradation detection method. One would have been motivated to make such a combination in order to accelerate each LoRa gateway slice agent reaching optimal performance (MAI [pg. 4309, section 1. Introduction]). KINGETSU in view of SEETHARAMAN, WANG, and MAI is not relied upon for teaching, but WATSON teaches: identifying, by the electronic device, super models of the first ML model used for transfer learning, and remaining layers of the first ML model to be trained based on inputs from a model registry; (WATSON [0019] teaches: "In one embodiment, a system and/or method can include estimating or determining an impact of training a new transfer model (i.e., first ML model), based on a sample or a set of existing transfer model (i.e., super models of the first ML model)." WATSON [0030] teaches: "In one or more embodiments, the identification component 114 can identify one or more pre-trained neural network models from the library of models 122 to serve as one or more transfer models (i.e., super models [...] used for transfer learning) based on the similarity metrics and a similarity threshold." WATSON [0023] teaches: "A model library may be a database storing existing or prior-trained models. The data set can be used as input (i.e., inputs from a model registry) to a model in the model library, and the model may be run based on the data set." 
WATSON [0024] teaches: "[0024] At 106, an estimate of similarity between the data set (e.g., received at 102) to a target layer of the model (i.e., remaining layers of the first ML model) is received. A target layer can be a layer of artificial neurons within a neural network such as a deep-learning model. [...]In one embodiment, a method that measures the similarity between data sets can be used to identify which layers in a deep learning network or elements in an ensemble network are most similar to the data set." WATSON [0040] teaches: "In embodiments, as described above, a system and/or method can predict which models and datasets are valuable for training a transfer model. In one embodiment, the method produces a set of transfer models by recombining existing transfer models. In one embodiment, the method may include organizing pre-trained models into clusters (e.g., create clusters of related transfer models), and using those clusters to predict the optimal set of data upon which to train new transfer models (i.e., to be trained based on inputs from a model registry)." WATSON [0015] teaches: "In one aspect, a new model may be learned, which for example, may fill a gap existing in a prior-trained model or prior trained models. For instance, a gap may occur if a prior-trained model's data used to train that prior-trained mode is relatively small. As another example, the data used to train a prior-trained model may be distant from a desired model. In one aspect, a new model may evolve a prior-trained model as new data is acquired. Such a new model can be used as a base model for transfer learning." WATSON [0016] "A new model can be trained to meet a desired requirement, for example, data size, and/or transfer value optimization, and/or another.") triggering, by the electronic device, training of the remaining layers of the first ML model. 
(WATSON [0024] teaches: "At 106, an estimate of similarity between the data set (e.g., received at 102) to a target layer of the model (i.e., remaining layers of the first ML model) is received. A target layer can be a layer of artificial neurons within a neural network such as a deep-learning model. [...] In one embodiment, a method that measures the similarity between data sets can be used to identify which layers in a deep learning network or elements in an ensemble network are most similar to the data set." WATSON [0040] teaches: "In one embodiment, the method may include organizing pre-trained models into clusters (e.g., create clusters of related transfer models), and using those clusters to predict the optimal set of data upon which to train new transfer models." WATSON [0053] teaches: "At 504, at least based on the similarity estimates, whether to train a new neural network model is determined." WATSON [0054] teaches: "At 506, responsive to determining to train the new neural network model, a cluster can be created among the plurality of prior-trained neural network models. A cluster can be created based on feature vectors produced by passing, in forward propagation, the sample data through the plurality of prior-trained neural network models." Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU, SEETHARAMAN, WANG, MAI, and WATSON before them, to include WATSON’s similarity estimation between a data set and target layer to determine training of a new neural network in KINGETSU, SEETHARAMAN, WANG, and MAI’s accuracy degradation detection method. One would have been motivated to make such a combination in order to improve training accuracy and leverage smaller data sets (WATSON [0041]). Regarding Claim 6: KINGETSU in view of SEETHARAMAN, WANG, MAI, and WATSON teaches the elements of claim 5 as outlined above. 
SEETHARAMAN further teaches: comparing configuration parameters of a new network slice and configuration parameters of a plurality of network slices based on the threshold configurations; (SEETHARAMAN [0064] teaches: "At step 416, parameters associated with each of the set of templates (i.e., configuration parameters of a plurality of network slices) is compared (i.e., comparing) with a set of parameters within the plurality of parameters associated with the NS-TMPLT-RQST (i.e., and configuration parameters of a new network slice). In an embodiment, the set of parameters are the parameters that remain after the one or more parameters used for comparison at the step 402 have been removed. Based on the comparison, a parameter matching score is computed for each of the set of templates at step 418. At step 420, a subset is identified from the set of templates. The parameter matching score for each template in the subset is above a second threshold score." SEETHARAMAN [0065] teaches: "Based on cumulative performance score and parameter matching score, a suitability score (i.e., based on the threshold configuration) may be computed for each template in the subset, at step 422. At step 424, the one or more NSLTs are identified from the subset. The suitability score for each of the one or more NSLTs is greater than a suitability threshold score." SEETHARAMAN [0070] teaches: "In another scenario, the requirement in the NS-TMPLT-RQST for the parameter (for example, security level) is more than a match for the corresponding parameter in an ADAPTED-NSLT. In this case, there may be two possibilities. First, the better match is good to have, for example, reliability supported in the NSLT is greater than what is requested in the NS-TMPLT-RQST." 
SEETHARAMAN [0071] teaches: "For each of the NSLTs that are found to be suitable, the network slice template provider module 206 consolidates the list of matching parameters, not matching parameters (that require a modification to the NSLT), and the corresponding implications, if any. The network slice template provider module 206 may also consolidate the details of superior matching parameters (i.e., more than what is required) without any modifications.”) identifying, based on the comparing configuration parameters of the new network slice and configuration parameters of the plurality of network slices, a network slice with the first ML model similar to the new network slice […] (SEETHARAMAN [0065] teaches: "Based on cumulative performance score and parameter matching score, a suitability score may be computed for each template in the subset (i.e., based on the comparing configuration parameters of the new network slice and configuration parameters of the plurality of network slices), at step 422. At step 424, the one or more NSLTs are identified (i.e., identifying [...] a network slice) from the subset. The suitability score for each of the one or more NSLTs is greater than a suitability threshold score." SEETHARAMAN [0071] teaches: "For each of the NSLTs that are found to be suitable (i.e., identifying [...] a network slice), the network slice template provider module 206 consolidates the list of matching parameters (i.e., similar to the new network slice), not matching parameters (that require a modification to the NSLT), and the corresponding implications, if any.") MAI further teaches: a network slice with the first ML model […] and capable for transfer learning. (MAI [pg. 4309, section 1. Introduction] teaches: "In this article, we design a network slicing architecture over SDN-based LoRaWAN, where the SDN controller can dynamically partition LoRa gateways’ resources (e.g., physical channel) into several virtual networks (i.e., a network slice) on the fly. 
[...] The LoRa gateways using the DDPG are able to improve the performance by exploring the environment and learning directly from their experiences. [...] each slice agent (i.e., a network slice with the first ML model) on each LoRa gateway has to learn from scratch (i.e., randomized policy), thereby resulting in a long learning time to reach the system’s optimal performance. To accelerate the training process, we introduce the transfer learning framework [7] (i.e., and capable of transfer learning). Transfer learning is a machine learning technique where experience gained acquired from one task can be transferred to other related tasks.") Regarding Claim 15: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale. Regarding Claim 16: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale. Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over KINGETSU in view of VOLODARSKIY and KUMAR. Regarding Claim 7: KINGETSU teaches the elements of claim 1 as outlined above. KINGETSU further teaches: […] information on the accuracy degradation of the first ML model; (KINGETSU [0119] teaches: "The detection unit 153 may output and display, onto the display unit 130, data identification information (i.e., information) on the operation data set serving as a basis of detecting the accuracy degradation (i.e., on accuracy degradation). Furthermore, the detection unit 153 may notify the training unit 151 of information indicating that accuracy degradation has been detected and retrain the machine learning model data 142. 
In this case, the training unit 151 retrains the machine learning model 50 (i.e., of the machine learning model) by using a training data set that is newly designated.") KINGETSU is not relied upon for teaching, but VOLODARSKIY teaches: obtaining, from a database, the information on the accuracy […] of the first ML model; (VOLODARSKIY [0065] teaches: "More specifically, steps 412 and 414 may be implemented by a trial-based optimization service 410 of selection module 113." VOLODARSKIY [0067] teaches: "In step 412, trial-based optimization service 410 begins executing the batched trials [...]. As the trials are executed, one or more statistics about each trial are obtained and stored, for example, in database(s) 114. These statistics may include, for example, the execution time of each trial, the state(s) of each trial, and/or the results of each trial." VOLODARSKIY [0069] teaches: "The data in this case may comprise the trial statistics stored (e.g., in database(s) (i.e., from a database) 114) by trial-based optimization service 410. In an embodiment, sufficient data may be determined to exist when a predetermined number of trials (e.g., one, two, five, ten, etc.) and/or models (i.e., first ML model) have been successfully executed." VOLODARSKIY [0070] teaches: "In step 424, estimation service 420 samples (i.e., obtaining) a batch of trials (i.e., data of accuracy) which have been completed, and estimates the time to train each model represented by each trial." Examiner's note: VOLODARSKIY [0003] discloses embodiments directed to automated machine learning to optimally select which machine learning models to train to provide the best accuracy within a given timeframe. KINGETSU discloses a method for accuracy degradation detection to retrain models when the accuracy degrades. 
A person having ordinary skill in the art could use VOLODARSKIY's database(s) that stores statistics of trials (e.g., the execution time of each trial, the state(s) of each trial, and/or the results of each trial) to store KINGETSU's detection unit 153 output, such as the data identification information on the operation data set serving as a basis of detecting the accuracy degradation in order to optimally select which model to train to provide the best accuracy (VOLODARSKIY [0003]). identifying first time interval to training the first ML model […]; (VOLODARSKIY [0070] teaches: "In step 424, estimation service 420 samples a batch of trials which have been completed, and estimates the time to train each model (i.e., identifying first time interval to training the first ML model) represented by each trial. […] In either case, estimation service 420 may estimate an approximate training time for each model based on a train-time model that is designed to predict run times for training a model based on the trial statistics accumulated by trial-based optimization service 410 (e.g., including the execution times of the trials) for that model.") identifying, based on the first time interval […], a third time interval for training the first ML model; (VOLODARSKIY [0049] teaches: "It may involve estimating a training time (i.e., based on the first time interval) and accuracy for two or more models represented in the batch of trials, and then selecting the best algorithm and hyperparameter settings to train (i.e., for training the first ML model) from an available set of algorithm/hyperparameter setting combinations based on a time constraint (i.e., identifying [...] a third time interval) set by the user." 
Examiner's note: Under broadest reasonable interpretation, a third time interval for training the first ML model can be interpreted as the actual time window in which VOLODARSKIY trains the model after estimating a training time, accuracy, algorithm, and hyperparameter settings to train the model.) during the third time interval, training the first ML model. (VOLODARSKIY [0049] teaches: "It may involve estimating a training time (i.e., based on the first time interval) and accuracy for two or more models represented in the batch of trials, and then selecting the best algorithm and hyperparameter settings to train (i.e., for training the first ML model) from an available set of algorithm/hyperparameter setting combinations based on a time constraint (i.e., identifying [...] a third time interval) set by the user.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU and VOLODARSKIY before them, to include VOLODARSKIY's estimation of time to train each model in KINGETSU's accuracy degradation detection method. One would have been motivated to make such a combination in order to select one or more models with the fastest estimated training time for further evaluation and to optimally select which machine-learning models to train to provide the best accuracy within a given timeframe (VOLODARSKIY [0071-0072] and [0003]). KINGETSU in view of VOLODARSKIY is not relied upon for teaching, but KUMAR teaches: identifying […] second time interval to obtain data for training the first ML model; (KUMAR [0046] teaches: "As shown in FIG. 1B, the ETL management platform may train a data model (i.e., for training the first ML model) using historical data, which may include a set of historical performance indicators, historical ETL completion time data (i.e., identifying [...] 
second time interval to obtain data), historical network modifications data, and/or the like." KUMAR [0048] teaches: "As shown by reference number 115, the ETL management platform may use the historical data to train a data model. The data model may be a Bayesian Network, a neural network, a Gaussian Mixture Model (GMM), and/or another type of predictive machine learning model.") identifying, […] the second time interval […] for training the first ML model; (KUMAR [0046] teaches: "As shown in FIG. 1B, the ETL management platform may train a data model (i.e., for training the first ML model) using historical data, which may include a set of historical performance indicators, historical ETL completion time data (i.e., identifying [...] second time interval to obtain data), historical network modifications data, and/or the like.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU, VOLODARSKIY, and KUMAR before them, to include KUMAR's ETL completion time data in KINGETSU and VOLODARSKIY's accuracy degradation detection method. One would have been motivated to make such a combination in order to identify when preventative actions need to be taken to ensure that the group of ETL jobs execute within a requested ETL completion time (KUMAR [0019]). Regarding Claim 17: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale. Claims 8-9 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over KINGETSU in view of ANANTHANARAYANAN. Regarding Claim 8: KINGETSU teaches the elements of claim 1 as outlined above. 
KINGETSU is not relied upon for teaching, but ANANTHANARAYANAN teaches: scaling up a resource for training the first ML model; and training, based on the resource scaled up, the first ML model. (ANANTHANARAYANAN [0039] teaches: "To estimate the resource demands, the micro-profiler measures the retraining duration per epoch when 100% of the GPU is allocated, and scales out the training time for different allocations, numbers of epochs, and training data sizes." ANANTHANARAYANAN [0083] teaches: "Hence, the GPU-time taken to retrain for each epoch in the current retraining window may be measured when 100% of the GPU is allocated to the retraining. This may allow the time to be scaled for a varying number of epochs, GPU allocations, and training data sizes in Algorithm 1." ANANTHANARAYANAN [0091] teaches: "GPU resources may be reallocated between training and inference jobs at timescales that are far more dynamic than other frameworks where the GPU allocations for jobs may be fixed upfront." ANANTHANARAYANAN [0094] teaches: "When the accuracy during the retraining varies from the expected value from micro-profiling, resource allocations may be adjusted reactively. Every few epochs (e.g., every 5 epochs), the current accuracy of the model being retrained is used to estimate its eventual accuracy when all the epochs are complete. The expected accuracy is updated in the profile of the retraining (Γ) with the new value, and then Algorithm 1 is run again for new resource allocations (but leaves the configuration that is used currently, γ, to be unchanged).") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU and ANANTHANARAYANAN before them, to include ANANTHANARAYANAN's scheduler for making decisions about resource allocation and retraining configurations in KINGETSU's accuracy degradation detection method. 
One would have been motivated to make such a combination in order to maximize overall inference accuracy for all input data over a given retraining window (ANANTHANARAYANAN [0065]). Regarding Claim 9: KINGETSU teaches the elements of claim 1 as outlined above. KINGETSU further teaches: identifying that traffic pattern is changed; (KINGETSU [0113] teaches: "[0113] The detection unit 153 compares the first proportion to the second proportion, determines (i.e., identifying) that concept drift occurs (i.e., that traffic pattern is changed) in the case where the second proportion is changed with respect to the first proportion, and then, detects accuracy degradation of the machine learning model 50.") KINGETSU is not relied upon for teaching, but ANANTHANARAYANAN teaches: identifying that information on key performance data (KPI) is to be changed based on the identifying that traffic pattern is changed. (ANANTHANARAYANAN [0032] teaches: "Continuous learning is one approach to addressing data drift." ANANTHANARAYANAN [0036] teaches: "First, the decision space is multi-dimensional, and comprises a diverse set of retraining and inference configurations, and choices of resource allocations over time. Second, it is difficult to know the performance of different configurations (in resource usage and accuracy, for example) without actually retraining using different configurations. Data drift may exacerbate these challenges because a decision that works well in a retraining window may not do so in the future." ANANTHANARAYANAN [0038] teaches: "In each retraining window, the resource scheduler makes the decisions described above to (1) decide which of the edge models to retrain; (2) allocate the edge server's GPU resources among the retraining and inference jobs, and (3) select the configurations of the retraining and inference jobs. 
In these decisions, the scheduler prioritizes retraining models of those video streams whose characteristics have changed the most, as these models may be most affected by data drift (i.e., based on the identifying that traffic pattern is changed). The scheduler decides against retraining the models which do not improve a target metric (i.e., identifying that information on key performance data (KPI) is to be changed).") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU and ANANTHANARAYANAN before them, to include ANANTHANARAYANAN's scheduler for making decisions about resource allocation and retraining configurations in KINGETSU's accuracy degradation detection method. One would have been motivated to make such a combination in order to maximize overall inference accuracy for all input data over a given retraining window (ANANTHANARAYANAN [0065]). Regarding Claim 18: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 8 and is rejected for similar reasons as claim 8 using similar teachings and rationale. Regarding Claim 19: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 9 and is rejected for similar reasons as claim 9 using similar teachings and rationale. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over KINGETSU in view of MAI and ANANTHANARAYANAN. Regarding Claim 10: KINGETSU teaches the elements of claim 1 as outlined above. KINGETSU is not relied upon for teaching, but MAI teaches: identifying at least one configuration parameter of a network slice related to the first ML model; (MAI [pg. 4309, section I. 
Introduction] teaches: "The gateway should be able to configure slice parameters [e.g., bandwidth (BW), spreading factor (SF), transmission power (TP)] (i.e., identifying at least one configuration parameter of a network slice) to satisfy the distinct QoS [5]. To address this issue, we propose a deep deterministic policy gradient (DDPG) based slice resource optimization algorithm [6] (i.e., related to the first ML model)." MAI [pg. 4315, section VI. Conclusion] teaches: "In addition, considering the limited number of available channels on each LoRa gateway, slices may suffer from performance degradation and resource starvation. To tackle this problem, we proposed a DDPG-based slice optimization algorithm to search the optimal SF and TP parameters configurations.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU and MAI before them, to include MAI’s determining network slice parameter configurations to satisfy a quality-of-service (QoS) in KINGETSU’s accuracy degradation detection method. One would have been motivated to make such a combination in order to accelerate each LoRa gateway slice agent reaching optimal performance (MAI [pg. 4309, section 1. Introduction]). KINGETSU in view of MAI is not relied upon for teaching, but ANANTHANARAYANAN teaches: training, based on the at least one configuration parameter is changed, the first ML model. (ANANTHANARAYANAN [0003] teaches: "Based upon the extrapolated inference accuracies determined for the superset of hyperparameter configurations, a set of selected hyperparameter configurations is output comprising a plurality of hyperparameter configurations for possible use in retaining the machine learning model." 
ANANTHANARAYANAN [0039] teaches: "A micro-profiler estimates the benefits and costs of retraining edge machine learning models using various hyperparameter configurations, and selects a set of hyperparameter configurations for possible use in retraining." ANANTHANARAYANAN [0139] teaches: "At 1420, the method 1400 includes retraining and operating the machine learning model using the one or more of the hyperparameter configuration and the computing resource allocation selected." ANANTHANARAYANAN [0060] teaches: "As mentioned above, to perform continuous training, edge computing devices may smartly decide when to retrain each video stream's model, how much resources to allocate, and what configurations to use." Examiner's note: Under broadest reasonable interpretation, based on the at least one configuration parameter is changed can be interpreted as the set of hyperparameter configurations to be used in retraining (e.g., changing hyperparameter configurations in retraining (i.e., training) the machine learning model (i.e., the first ML model).) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of KINGETSU, MAI, and ANANTHANARAYANAN before them, to include ANANTHANARAYANAN's scheduler for making decisions about resource allocation and retraining configurations in KINGETSU and MAI's accuracy degradation detection method. One would have been motivated to make such a combination in order to maximize overall inference accuracy for all input data over a given retraining window (ANANTHANARAYANAN [0065]). Regarding Claim 20: KINGETSU teaches the elements of claim 11 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 10 and is rejected for similar reasons as claim 10 using similar teachings and rationale. 
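To make the combined teaching concrete, the following minimal sketch illustrates the retraining flow described above: accuracy degradation is detected from a shift in class proportions between windows (as in KINGETSU [0113]), and only the layers that no longer match the incoming data are retrained while the rest are frozen for transfer learning (as in WATSON [0024] and [0030]). This is a hypothetical illustration; every function name and threshold is invented and appears in none of the cited references.

```python
# Hypothetical sketch, not taken from the record or the cited references.

# KINGETSU-style drift check: compare class proportions between the
# training window and the live (operation) window.
def proportion_shift(train_labels, live_labels):
    """Absolute change in the positive-class proportion between windows."""
    p_train = sum(train_labels) / len(train_labels)
    p_live = sum(live_labels) / len(live_labels)
    return abs(p_live - p_train)

# WATSON-style layer selection: freeze layers still similar to the new
# data and retrain only the remaining layers. Thresholds are invented.
def plan_retraining(layer_similarity, drift,
                    drift_threshold=0.10, similarity_threshold=0.80):
    """Return the layer names to retrain, or [] when no drift is detected."""
    if drift < drift_threshold:
        return []  # accuracy has not degraded; keep the deployed model
    return [name for name, score in layer_similarity.items()
            if score < similarity_threshold]

drift = proportion_shift([1, 0, 0, 1], [1, 1, 1, 0])
layers = plan_retraining({"conv1": 0.95, "conv2": 0.90, "head": 0.40}, drift)
print(layers)  # ['head']: only the poorly matching layer is retrained
```

A production system would compute the per-layer similarity scores from the data itself (WATSON measures similarity between the data set and each target layer); here they are supplied directly to keep the sketch self-contained.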
Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: SEETHARAMAN (US 20220021590 A1) relates to service requests for determining suitable network slices. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alvaro S Laham Bauzo whose telephone number is (571)272-5650. The examiner can normally be reached Mon-Fri 7:30 AM - 11:00 AM | 1:00 PM - 5:30 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached on (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.S.L./Examiner, Art Unit 2146 /USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146

Prosecution Timeline

Apr 24, 2023
Application Filed
Feb 04, 2026
Non-Final Rejection — §101, §102, §103
Apr 03, 2026
Examiner Interview Summary
Apr 03, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12475388
MACHINE LEARNING MODEL SEARCH METHOD, RELATED APPARATUS, AND DEVICE
2y 5m to grant · Granted Nov 18, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
33%
Grant Probability
99%
With Interview (+100.0%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
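The projections above follow from simple arithmetic over the examiner's career data. The sketch below states the assumed methodology (it is not the vendor's published formula): allow rate as grants over resolved cases, and interview lift as the relative improvement of the with-interview allow rate over the without-interview baseline.

```python
# Assumed (not published) formulas behind examiner analytics of this kind.

def allow_rate(granted, resolved):
    """Career allow rate: grants as a fraction of resolved cases."""
    return granted / resolved if resolved else 0.0

def interview_lift(with_interview, without_interview):
    """Relative lift of the with-interview allow rate over the baseline.

    Each argument is a (granted, resolved) pair; returns None when the
    baseline rate is zero, since the lift is then undefined.
    """
    baseline = allow_rate(*without_interview)
    if baseline == 0.0:
        return None
    return (allow_rate(*with_interview) - baseline) / baseline

print(f"{allow_rate(1, 3):.0%}")  # 33%, matching the career allow rate shown
```

Under these assumptions, a +100.0% lift means the with-interview allow rate is exactly double the baseline, e.g. `interview_lift((2, 2), (1, 2))` returns `1.0`.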
