Prosecution Insights
Last updated: April 19, 2026
Application No. 17/863,576

METHOD AND ELECTRONIC DEVICE FOR MANAGING MACHINE LEARNING SERVICES IN WIRELESS COMMUNICATION NETWORK

Non-Final OA: §102, §103, §112
Filed
Jul 13, 2022
Examiner
RASHID, ISHRAT
Art Unit
2459
Tech Center
2400 — Computer Networks
Assignee
Samsung Electronics Co., Ltd.
OA Round
5 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 5-6
To Grant: 3y 2m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 58% (115 granted / 198 resolved; at TC average)
Interview Lift: +19.9% among resolved cases with interview (strong)
Typical Timeline: 3y 2m average prosecution; 22 applications currently pending
Career History: 220 total applications across all art units

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)
Tech Center average is an estimate. Based on career data from 198 resolved cases.
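The per-statute deltas above appear to share a single baseline; a quick sketch (using the dashboard's own figures, and assuming delta = examiner rate minus Tech Center average, which is an inference rather than a documented formula) recovers the implied TC average from each pair:

```python
# (examiner allowance rate %, delta vs TC average %) per statute,
# taken from the Statute-Specific Performance figures above.
rates = {
    "101": (7.0, -33.0),
    "103": (53.5, +13.5),
    "102": (15.5, -24.5),
    "112": (17.8, -22.2),
}

for statute, (examiner_rate, delta) in rates.items():
    tc_avg = examiner_rate - delta  # assumed: delta = examiner - TC average
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")
```

All four pairs imply the same ~40.0% baseline, which suggests the deltas are computed against one TC-wide average rather than per-statute baselines.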

Office Action

Grounds of rejection: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 17 December, 2025 has been entered. This communication is in response to the remarks and amendments filed on 17 December, 2025. Claims 1, 3-10 and 12-21 are pending. Claims 1, 5-6, 10, 15 and 19-20 are amended.

Response to Arguments

35 USC § 103: Regarding amended claim 1, Applicant argues that the combination of Li-Sachdeva-Bellamkonda does not explicitly teach the currently amended claim limitations. Examiner finds the argument persuasive, and a new ground of rejection is presented herewith.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3-10 and 13-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "wherein each of the deployed at least one ML package includes a predicted requirement of network resources for implementing a ML technique, a predicted ML model, an error prediction window for the predicted ML model, a periodicity of predicting an error for the predicted ML model, and a training accuracy for the predicted ML model". However, Applicant's Specification does not provide support for such a scope. For example, Applicant's Specification at [0009] provides:

In an embodiment, each of the plurality of ML packages comprises at least one of: a predicted requirement of the network resources for implementing a ML technique, predicted optimal ML model and related libraries, an error prediction window, periodicity of predicting the error, at least one of: a training accuracy and a prediction accuracy.

For purposes of further examination, Examiner will construe at least one of the listed factors to provide for said limitation. Respective dependent claims do not cure the deficiency of the parent claim(s) and therefore inherit the rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 6, 10, 12-13, 15, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al (US 2022/0038349), in view of Sachdeva et al (US 2024/0096155), in view of Bellamkonda et al (US 11,012,872), further in view of Kuo et al (US 2019/0156246).

Regarding claim 1, Li (Abstract) teaches a method for managing machine learning (ML) services by an electronic device in a wireless communication network, the method comprising: storing a plurality of ML packages (Li Abstract, wherein selecting an AI/ML model provides for having stored a plurality of AI/ML packages; claim 1); receiving a trigger based on at least one network service request from a server (Li claim 12); determining a plurality of parameters corresponding to the at least one network service request, in response to receiving the trigger from the server (Li claim 13); determining at least one ML package from the plurality of ML packages stored in a memory based on the trigger and the plurality of parameters corresponding to the at least one network service request (Li claim 10); and deploying the determined at least one ML package for executing the at least one network service request (Li fig.3B).
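The claim 1 method mapped above (store a plurality of ML packages, receive a trigger, derive parameters, select a package, deploy it) can be sketched at a high level. Everything here is a hypothetical illustration for orientation only: the class names, the selection heuristic, and the package fields are invented and are not taken from Applicant's specification or any cited reference.

```python
from dataclasses import dataclass, field

@dataclass
class MLPackage:
    """Hypothetical stand-in for a stored ML package."""
    name: str
    supported_services: set[str] = field(default_factory=set)

@dataclass
class MLServiceManager:
    # The "plurality of ML packages stored in a memory" of the claim.
    packages: list[MLPackage]

    def handle_trigger(self, service_request: str, params: dict) -> MLPackage:
        # Determine at least one ML package based on the trigger and the
        # parameters of the network service request (params is illustrative
        # and unused by this toy matcher).
        candidates = [p for p in self.packages
                      if service_request in p.supported_services]
        if not candidates:
            raise LookupError(f"no ML package for {service_request!r}")
        return self.deploy(candidates[0])

    def deploy(self, package: MLPackage) -> MLPackage:
        # Deploy the determined package to execute the service request.
        print(f"deploying {package.name}")
        return package

mgr = MLServiceManager(packages=[
    MLPackage("slice-anomaly-detector", {"slice-anomaly"}),
    MLPackage("traffic-forecaster", {"traffic-forecast"}),
])
selected = mgr.handle_trigger("slice-anomaly", params={"latency_ms": 12})
```

The sketch deliberately collapses the claim's "determining a plurality of parameters" step into a precomputed argument; the point is only the trigger-to-package-to-deployment flow the rejection maps onto Li.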
Li teaches the above, including storing a plurality of AI/ML packages, but Li does not explicitly teach receiving a trigger indicating an anomaly corresponding to a newly formed network slice, or that the plurality of AI/ML packages are stored in memory of the electronic device.

However, in a similar field of endeavor, Sachdeva teaches a plurality of AI/ML packages stored in the memory of an electronic device (Sachdeva [0085] provides "The first, second and/or third ML models can be all stored on the same device (e.g., on the client device 120, on the access control device 110, or centrally on the authorization management system 140). In some cases, one of the first, second and/or third ML models can be implemented by one device (e.g., on the client device 120, on the access control device 110, or centrally on the authorization management system 140) while another one of the first, second and third ML models is implemented by a different device (e.g., on the client device 120, on the access control device 110, or centrally on the authorization management system 140)"). One of ordinary skill in the art before the effective filing date of Applicant's claimed invention would be motivated to use the feature of the adaptability of how AI/ML packages can be stored within a single device as taught by Sachdeva, or centrally as taught by Li, to provide flexibility in the use of such AI/ML models for various data sets and in various different scenarios.

Li-Sachdeva teaches triggers for network service requests, but Li-Sachdeva does not explicitly teach the trigger indicating an anomaly corresponding to a newly formed network slice.
However, in a similar field of endeavor, Bellamkonda teaches the trigger indicating an anomaly corresponding to a newly formed network slice (Bellamkonda figs.4A-4D and col.18 line 52-col.19 line 19 provides "…orchestrator device 130 may receive various inputs, such as core inputs 403, RAN inputs 405, slice/service scope inputs 407, ML model and prediction inputs 410, and network slice generation trigger inputs 413. An input may be of a per slice basis. An input may be of a network device of relevance, of a tier of the network and associated time granularity, and so forth. As an example, core inputs 403 may include information pertaining to congestion state, route congestion, flow state and/or statistics, network topology and/or outages; RAN inputs 405 may include information pertaining to context, optimization state, and performance; slice/service scope inputs 407 may include information pertaining to a bit rate (e.g., guaranteed bit rate (GBR), maximum bit rate (MBR), non-GBR, aggregate MBR (AMBR), etc.), latency, reliability, throughput, and/or other types of QoS, KPIs; ML model and prediction inputs 410 may include information pertaining to a network slice, policies, and/or time granularities, a trained machine learning model of relevance to the network slice and/or application service, anomaly detection; and network slice generation trigger inputs 413 may include information pertaining to network slice constraints for new or existing network slices (e.g., latency, reliability, throughput, and/or other QoS, KPIs; policies, threshold values, and/or other types of configurations), and information pertaining to network slice generational inputs (e.g., end device requests for a network slice, network-based requests for a network slice, etc.). As further illustrated, AI/ML framework 110 may include slice/service scope inputs 407 (and potential other inputs illustrated in FIG. 4A), which may be used as a basis to provide ML model and prediction inputs 410 to orchestrator device 130", wherein an anomaly can be reasonably interpreted to be anything such as latency, throughput, etc. being outside of a normal range). One of ordinary skill in the art before the effective filing date of Applicant's claimed invention would be motivated to implement the feature of SLA-based triggers as taught by Bellamkonda in the system taught by Li-Sachdeva, to ensure undisrupted quality of service.

Li-Sachdeva-Bellamkonda teaches the above but does not explicitly teach wherein each of the deployed at least one ML package includes a predicted requirement of network resources for implementing a ML technique, a predicted ML model, an error prediction window for the predicted ML model, a periodicity of predicting an error for the predicted ML model, and a training accuracy for the predicted ML model.

However, in a similar field of endeavor, Kuo teaches wherein each of the deployed at least one ML package includes a predicted requirement of network resources for implementing a ML technique, a predicted ML model, an error prediction window for the predicted ML model, a periodicity of predicting an error for the predicted ML model, and a training accuracy for the predicted ML model (Kuo [0014] provides "a 'package' or 'machine learning package' may include one or more components that may be used by a connected device and/or may configure a connected device such that the connected device may execute one or more machine learning models and perform one or more actions based on results generated by the one or more models. For example, a connected device may download a machine learning package that includes one or more components that the IoT device may install and execute to perform facial recognition based on a machine learning model and to perform one or more actions based on facial recognition results generated by the machine learning model.
In some embodiments, machine learning may be implemented using any suitable machine learning/artificial intelligence techniques (e.g., neural networks, deep neural networks, reinforcement learning, decision tree learning, genetic algorithms, classifiers, etc.)"). One of ordinary skill in the art before the effective filing date of Applicant's claimed invention would be motivated to define the content of a machine learning package as taught by Kuo in the system taught by Li-Sachdeva-Bellamkonda, to customize packages based on the inference application, the machine learning framework, the machine learning model, and a hardware platform of the edge device (Kuo Abstract).

Regarding claim 3, the method of claim 1, wherein the plurality of parameters corresponding to the at least one network service request comprises: information of service profile of a network, ML requirements of at least one network operator, network traffic pattern for a specific service and unfilled ML templates associated with the specific service (Li [0059-0060]).

Regarding claim 4, the method of claim 3, wherein the network traffic pattern for the service is determined by: receiving the information of service profile of the network and the ML requirements of the at least one network operator as inputs (Li claim 5); and determining a plurality of network elements exhibiting the same network traffic pattern over a period of time (Li claim 5 provides for historical data).

Regarding claim 6, the method of claim 1, wherein each of the at least one ML package further comprises at least one of: one or more libraries related to the predicted ML model or a prediction accuracy for the predicted ML model (Kuo [0029] provides "The machine learning model 112p may process the collected data to generate inference data (e.g., one or more inferences and/or one or more predictions)"). Motivation provided with reference to claim 1.
Regarding claim 10, this claim contains limitations found within those of claim 1, and the same rationale of rejection applies, where applicable. Regarding claim 12, this claim contains limitations found within those of claim 3, and the same rationale of rejection applies, where applicable. Regarding claim 13, this claim contains limitations found within those of claim 4, and the same rationale of rejection applies, where applicable. Regarding claim 15, this claim contains limitations found within those of claim 6, and the same rationale of rejection applies, where applicable. Regarding claim 19, this claim contains limitations found within those of claim 1, and the same rationale of rejection applies, where applicable.

Claims 5, 7, 9, 14, 16 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al (US 2022/0038349), in view of Sachdeva et al (US 2024/0096155), in view of Bellamkonda et al (US 11,012,872), in view of Kuo et al (US 2019/0156246), further in view of Parvataneni et al (US 2021/0211352).

Regarding claim 5, Li-Sachdeva-Bellamkonda-Kuo teaches the method of claim 4, but Li-Sachdeva-Bellamkonda-Kuo does not explicitly teach further comprising: grouping each of the plurality of network elements exhibiting the same network traffic pattern over the period of time; training one among each of the plurality of network elements exhibiting the same network traffic pattern using a specific training model; and instructing an ML orchestrator to train the remaining plurality of network elements exhibiting the same network traffic pattern using the specific training model used by the ML services management controller for training the one network element, wherein the use of the specific training model used by the ML services management controller for training the one network element, to train the remaining plurality of network elements, results in saving of ML resources used for training.
However, in a similar field of endeavor, Parvataneni teaches grouping each of the plurality of network elements exhibiting the same network traffic pattern over the period of time (Parvataneni [0032-0035]); training one among each of the plurality of network elements exhibiting the same network traffic pattern using a specific training model (Parvataneni [0032-0035]); and instructing an ML orchestrator to train the remaining plurality of network elements exhibiting the same network traffic pattern using the specific training model used by the ML services management controller for training the one network element, wherein the use of the specific training model used by the ML services management controller for training the one network element, to train the remaining plurality of network elements, results in saving of ML resources used for training (Parvataneni [0032-0035]). One of ordinary skill in the art before the effective filing date of Applicant's claimed invention would be motivated to save ML resources used for training to increase overall efficiency of the system.

Regarding claim 7, Li-Sachdeva-Bellamkonda-Kuo teaches the method of claim 1, but Li-Sachdeva-Bellamkonda-Kuo does not explicitly teach wherein determining the at least one ML package from the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request comprises: inputting the trigger received from the server and the plurality of parameters corresponding to the at least one network service request to one of a deep reinforcement learning engine and a deep dynamic learning engine; and determining the at least one ML package of the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request by one of the deep reinforcement learning engine and the deep dynamic learning engine.
However, in a similar field of endeavor, Parvataneni teaches determining the at least one ML package from the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request comprises: inputting the trigger received from the server and the plurality of parameters corresponding to the at least one network service request to one of a deep reinforcement learning engine and a deep dynamic learning engine (Parvataneni [0031-0036]); and determining the at least one ML package of the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request by one of the deep reinforcement learning engine and the deep dynamic learning engine (Parvataneni [0031-0036]). Motivation provided with reference to claim 5.

Regarding claim 9, Li-Sachdeva-Bellamkonda-Kuo teaches the method of claim 1, but Li-Sachdeva-Bellamkonda-Kuo does not explicitly teach further comprising: monitoring a plurality of network service requests from the server; identifying one or more network service requirements associated with each of the network service requests; monitoring one or more machine learning packages deployed from an ML model repository in response to each of the network service requests from the plurality of network service requests; generating a co-relation between each of the network service requests, the corresponding network service requirements and the one or more machine learning packages deployed from the ML model repository for optimization of each network service over a period of time; receiving an incoming network service request; and deploying the ML package corresponding to the network service requirements of the incoming network service request based on the generated co-relation.
However, in a similar field of endeavor, Parvataneni teaches monitoring a plurality of network service requests from the server (Parvataneni [0050-0056]); identifying one or more network service requirements associated with each of the network service requests (Parvataneni [0050-0056]); monitoring one or more machine learning packages deployed from an ML model repository in response to each of the network service requests from the plurality of network service requests (Parvataneni [0050-0056]); generating a co-relation between each of the network service requests, the corresponding network service requirements and the one or more machine learning packages deployed from the ML model repository for optimization of each network service over a period of time (Parvataneni [0050-0056]); receiving an incoming network service request (Parvataneni [0050-0056]); and deploying the ML package corresponding to the network service requirements of the incoming network service request based on the generated co-relation (Parvataneni [0050-0056]). Motivation provided with reference to claim 5.

Regarding claim 14, this claim contains limitations found within those of claim 5, and the same rationale of rejection applies, where applicable. Regarding claim 16, this claim contains limitations found within those of claim 7, and the same rationale of rejection applies, where applicable. Regarding claim 18, this claim contains limitations found within those of claim 9, and the same rationale of rejection applies, where applicable.

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al (US 2022/0038349), in view of Sachdeva et al (US 2024/0096155), in view of Bellamkonda et al (US 11,012,872), in view of Kuo et al (US 2019/0156246), further in view of Siracusa et al (US 2020/0380301).
Regarding claim 8, Li-Sachdeva-Bellamkonda-Kuo has taught the method of claim 6, but Li-Sachdeva-Bellamkonda-Kuo does not explicitly teach further comprising: filling values corresponding to the determined at least one ML package in at least one unfilled ML template associated with the specific service.

However, in a similar field of endeavor, Siracusa teaches filling values corresponding to the determined at least one ML package in at least one unfilled ML template associated with the specific service (Siracusa [0004] provides "The machine learning (ML) templates can allow a developer to easily create a customized model without having to program a separate machine learning model. As examples, the training data can be obtained from storage or can be integrated from live capture/recordings of images or sound. Both stored and live data can be used for training and testing the model, all within the application"). One of ordinary skill in the art before the effective filing date of Applicant's claimed invention would be motivated to use the feature of ML templates as it would allow a developer to easily create a customized model without having to program a separate machine learning model (Siracusa [0004]).

Regarding claim 17, this claim contains limitations found within those of claim 8, and the same rationale of rejection applies, where applicable.

Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al (US 2022/0038349), in view of Sachdeva et al (US 2024/0096155), in view of Bellamkonda et al (US 11,012,872), in view of Kuo et al (US 2019/0156246), further in view of Bhide et al (US 2021/0295204).
Regarding claim 20, Li-Sachdeva-Bellamkonda-Kuo has taught the method of claim 1 including ML packages, but Li-Sachdeva-Bellamkonda-Kuo does not explicitly teach wherein each package comprises at least one of: a predicted requirement of the network resources for implementing a ML technique, a predicted optimal ML model and related one or more libraries, an error prediction window, a periodicity of predicting the error, and a training accuracy or a prediction accuracy.

However, in a similar field of endeavor, Bhide teaches a prediction accuracy (Bhide [0023], [0025] provides for prediction accuracy in Machine Learning). One of ordinary skill in the art before the effective filing date of Applicant's claimed invention would be motivated to use the feature of prediction accuracy to counter a drop in accuracy and improve model accuracy by re-training the model with new training data (Bhide [0002]).

Regarding claim 21, this claim contains limitations found within those of claim 20, and the same rationale of rejection applies, where applicable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Cmielowski et al US 2022/0114401.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHRAT RASHID, whose telephone number is (571) 272-5372. The examiner can normally be reached 10AM-6PM EST M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tonia L Dollinger, can be reached at 571-272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.R/ Examiner, Art Unit 2459
/SCHQUITA D GOODWIN/ Primary Examiner, Art Unit 2459

Prosecution Timeline

Jul 13, 2022: Application Filed
Jul 27, 2024: Non-Final Rejection (§102, §103, §112)
Sep 11, 2024: Interview Requested
Oct 03, 2024: Applicant Interview (Telephonic)
Oct 05, 2024: Examiner Interview Summary
Nov 04, 2024: Response Filed
Dec 14, 2024: Final Rejection (§102, §103, §112)
Jan 16, 2025: Request for Continued Examination
Jan 23, 2025: Response after Non-Final Action
Mar 21, 2025: Non-Final Rejection (§102, §103, §112)
Jun 26, 2025: Response Filed
Oct 12, 2025: Final Rejection (§102, §103, §112)
Dec 17, 2025: Request for Continued Examination
Dec 20, 2025: Response after Non-Final Action
Jan 10, 2026: Non-Final Rejection (§102, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603930: CONTENT DELIVERY (2y 5m to grant; granted Apr 14, 2026)
Patent 12598109: NETWORK PERFORMANCE EVALUATION USING AI-BASED NETWORK CLONING (2y 5m to grant; granted Apr 07, 2026)
Patent 12587586: REDUCING LATENCY AND OPTIMIZING PROXY NETWORKS (2y 5m to grant; granted Mar 24, 2026)
Patent 12587593: DATA TRANSMISSION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12562993: PACKET FRAGMENTATION PREVENTION IN AN SDWAN ROUTER (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 58%
With Interview: 78% (+19.9%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 198 resolved cases by this examiner. Grant probability derived from career allow rate.
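The with-interview figure appears to be the base grant probability plus the observed interview lift, with rounding (58% + 19.9 points ≈ 78%); whether the dashboard actually computes it this way is an assumption, but the arithmetic checks out:

```python
base_grant_probability = 0.58   # career allow rate
interview_lift = 0.199          # +19.9 points among interviewed resolved cases
with_interview = base_grant_probability + interview_lift

# 0.58 + 0.199 = 0.779, which rounds to the displayed 78%.
print(f"{with_interview:.0%}")
```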
