Prosecution Insights
Last updated: April 19, 2026
Application No. 18/575,792

ENHANCED ON-THE-GO ARTIFICIAL INTELLIGENCE FOR WIRELESS DEVICES

Status: Non-Final OA (§103)
Filed: Dec 29, 2023
Examiner: AHMED, ATIQUE
Art Unit: 2413
Tech Center: 2400 — Computer Networks
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 80%, above average (369 granted / 460 resolved; +22.2% vs TC avg)
Interview Lift: strong, +15.9% among resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 37 applications currently pending
Career History: 497 total applications across all art units

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§103: 66.6% (+26.6% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Deltas are relative to the estimated Tech Center average; based on career data from 460 resolved cases.
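As a rough cross-check, the headline examiner figures above follow from the raw counts. The sketch below assumes the interview lift is a simple additive delta on the career allow rate; the tool's actual model is not disclosed, so treat this as an approximation.

```python
# Cross-check of the examiner dashboard figures (assumed additive lift model).
granted = 369          # career grants
resolved = 460         # career resolved cases
interview_lift = 15.9  # percentage-point lift reported for interviews

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")                   # 80.2%, shown as 80%
print(f"With interview:    {allow_rate + interview_lift:.0f}%")  # 96%
```

Under this assumption the 80% career rate plus the +15.9% lift reproduces the 96% with-interview figure shown above.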

Office Action

§103
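For orientation before the Office Action text, the claim 1 limitations at issue describe a RAN-side flow: receive a UE's model-configuration request, determine the UE's location, select an available ML agent, request the configuration from that agent, and return it to the UE. The sketch below is a hypothetical illustration of that flow only; all names (`MLAgent`, `handle_ue_request`, the zone strings) are invented for the example and do not come from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class MLAgent:
    """Hypothetical stand-in for an 'available machine learning agent'."""
    name: str
    zone: str

    def get_configuration(self, request: dict) -> dict:
        # Second request: the RAN asks the agent for a model configuration.
        return {"agent": self.name, "model": "demo", "task": request.get("task")}

def handle_ue_request(request: dict, ue_zone: str, agents: list[MLAgent]) -> dict:
    """Sketch of the claim-1 flow: request -> location -> agent -> response."""
    # Select an available agent based on the first request and the UE's location.
    agent = next(a for a in agents if a.zone == ue_zone)
    # Identify the configuration returned by the agent and format the response.
    config = agent.get_configuration(request)
    return {"status": "ok", "ml_configuration": config}
```

For example, `handle_ue_request({"task": "qos"}, "zone-7", [MLAgent("edge-1", "zone-7")])` yields a response whose `ml_configuration` carries the selected agent's output.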
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This Office action is a response to an application filed on 12/29/2023, where claims 1-20 are pending. Claims 21-25 are cancelled.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 12/29/2023 has been considered by the examiner. The submission is in compliance with the provisions of 37 CFR 1.97.

Drawings

4. The drawings were received on 12/29/2023. These drawings are acceptable.

Claim Rejections - 35 U.S.C. § 103

5. The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki et al. (US 20220294548 A1), hereinafter Pezeshki, in view of Klein et al. (US 20230020939 A1), hereinafter Klein.

As to claim 1, Pezeshki teaches an apparatus of a radio access network (RAN) device for facilitating machine learning-based operations, the apparatus comprising processing circuitry coupled to storage, the processing circuitry configured to ([0078], Fig. 4: base station facilitates machine learning): identify a first request, received from a user equipment (UE) device, for a machine learning model configuration ([0077]-[0078], Fig. 4: a UE may request a neural network for ML model parameters); determine a location of the UE device ([0089], Fig. 7: the base station 702 receives the zone ID and the associated ML model parameters from the first UE 704 at 722; the base station 702 may determine ML model parameters associated with the geographic area identified by the zone ID, which may be an area in which the first UE 704 is located); select, based on the first request and the location ([0078], [0089]: a UE may request a neural network for an ML model, and UE location; [0090], Fig. 7: the base station 702 may determine whether the received ML model parameters/agent are applicable to one or more UEs in a geographical zone in which the first UE 704 is located); and format a response to the first request, the response comprising the machine learning configuration for the UE device ([0091], Fig. 7: at 734, the base station 702 may provide/share one or more ML model parameters received from one or more UEs to other UEs, such as UEs to which one or more ML model parameters may apply and/or UEs that have requested an ML model update).

Pezeshki does not teach: an available machine learning agent; format a second request to the available machine learning agent for the machine learning configuration; and identify the machine learning configuration received from the available machine learning agent based on the second request.

Klein teaches an available machine learning agent ([0033]: machine learning cores/agents or modules across multiple network devices of a multi-layered network, e.g., an available ML agent); format a second request to the available machine learning agent for the machine learning configuration ([0033]: the network takes intelligent routing decisions at runtime to service client requests (e.g., format second requests) to machine learning cores/agents or modules across multiple network devices of a multi-layered network (e.g., an available ML agent), with machine learning models and hyperparameters, and determines the machine learning capabilities); and identify the machine learning configuration received from the available machine learning agent based on the second request ([0049]-[0050]: the network device 116 may determine that the request 150 requests processing an object in a video stream and that its machine learning core 118 (or machine learning core 108) should process it, based on comparing the attributes (i.e., the machine learning capabilities and/or computer resource characteristics) of the different network devices).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Klein with the teaching of Pezeshki, because Klein teaches using multiple network devices including machine learning cores or modules, capability information of which is shared across the network to route client requests, thereby improving throughput and latency (Klein [0035]).

Claim 19 is interpreted and rejected for the same reasons as set forth in claim 1.

As to claim 2, the combination of Pezeshki and Klein, specifically Klein, teaches wherein the first request comprises an indication of a task associated with at least one of movement of the UE device, a quality-of-service recommendation, energy efficiency, inferencing accuracy, or communication delay, and wherein to select the available machine learning agent is further based on the task ([0035]: client requests that require machine learning processing, where heavy machine learning processing is required; for example, a first portion of a client request can be processed by a machine learning core at a CPE layer based on a CPE device's machine learning capabilities, and a second portion of the request can then be processed at the cloud layer). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Klein with the teaching of Pezeshki, because Klein teaches using multiple network devices including machine learning cores or modules, capability information of which is shared across the network to route client requests, thereby improving throughput and latency (Klein [0035]).

Claim 20 is interpreted and rejected for the same reasons as set forth in claim 2.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki and Klein, and further in view of Mark et al. (US 20200371509 A1), hereinafter Mark.

As to claim 3, the combination of Pezeshki and Klein does not teach wherein the processing circuitry is further configured to: identify an update to the machine learning configuration, the update received from the available machine learning agent; and format the update to the machine learning configuration to transmit to the UE device.
Mark teaches wherein the processing circuitry is further configured to: identify an update to the machine learning configuration ([0016]: determining a first machine learning parameter based on at least the first physical sensor data, and transmitting the first machine learning parameter to a machine learning hub system; the machine learning hub system is configured to update a multi-tenant machine learning model based at least on the first machine learning parameter); the update received from the available machine learning agent ([0051], [0059], Fig. 2: the multi-tenant engine 216/machine learning engine may be configured to algorithmically combine the received machine learning parameters to update the parameters of the multi-tenant model 218); and format the update to the machine learning configuration to transmit to the UE device ([0059], Fig. 2: the hub system 214 may be configured to transmit updated machine learning parameters to the spoke systems/spoke system associated with the tenant).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Mark with the teaching of Pezeshki and Klein, because Mark teaches that multi-tenant machine learning determines optimizations relating to fiber placement confirmation, providing improved correspondence between specifications and results, which may reduce cost (Mark [0075]).

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki and Klein, and further in view of Wang et al. (US 20220353803 A1), hereinafter Wang.

As to claim 4, the combination of Pezeshki and Klein does not teach wherein the processing circuitry is further configured to: determine a second location of the UE device; select, based on the second location, a second available machine learning agent; format a third request for a second machine learning configuration to transmit to the second available machine learning agent; identify the second machine learning configuration received from the second available machine learning agent based on the third request; and format a second response to the first request, the second response comprising the second machine learning configuration for the UE device.

Wang teaches wherein the processing circuitry is further configured to: determine a second location of the UE device ([0108]: the UE moves to a second location); select, based on the second location, a second available machine learning agent ([0108], Fig. 1, Fig. 6, Fig. 8: the network-slice manager 190 can transmit a second available machine-learning architecture message 816 to inform the UE 110 of the latest available machine-learning architectures 606 associated with the network slice); format a third request for a second machine learning configuration to transmit to the second available machine learning agent ([0108], [0117], Fig. 1, Fig. 4, Fig. 8: machine-learning architecture request message 832/third request; the network-slice manager 190 can transmit a second available machine-learning architecture message 816 to inform the UE 110 of the latest available machine-learning architectures 606 associated with the network slice 400); identify the second machine learning configuration received from the second available machine learning agent based on the third request ([0108], [0117], Fig. 1, Fig. 4, Fig. 8: machine-learning architecture request message 832/third request; the network-slice manager 190 grants the UE 110 permission to use the machine-learning architecture 210 if the machine-learning architecture 210 is one of the machine-learning architectures 211, 212, or 213 associated with the network slice 400); and format a second response to the first request, the second response comprising the second machine learning configuration for the UE device ([0146], [0148], [0149]: the selecting of the second machine-learning architecture is based on the first requested quality-of-service level; the user equipment moving to a geographical location associated with a different base station or a different tracking area).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Pezeshki and Klein, because Wang teaches that by sharing available machine-learning architectures with the UE, the network-slice manager allows the UE to efficiently determine an appropriate machine-learning architecture that satisfies the requested quality-of-service level of the application (Wang [0095]).

As to claim 5, the combination of Pezeshki and Klein does not teach wherein the processing circuitry is further configured to: identify a second request, received from a second UE device, for a second machine learning model configuration; determine a second location of the second UE device; select, based on the second request and the second location, a second available machine learning agent; and format, for transmission to the UE device, an indication of the second available machine learning agent from which the UE device may request the second machine learning model configuration.
Wang teaches wherein the processing circuitry is further configured to: identify a second request, received from a second UE device ([0066], [0142], Fig. 1, Fig. 4: a second UE transmitting, to the network-slice manager, a second machine-learning architecture request message to request permission to use the second machine-learning architecture), for a second machine learning model configuration ([0066], Fig. 1, Fig. 4: a second UE 112 with limited available power can operate with the machine-learning architecture 212 but not the machine-learning architecture 211); determine a second location of the second UE device ([0108]: the second UE 112 moves to a different location or tracking area); select, based on the second request and the second location, a second available machine learning agent ([0109], Fig. 1, Fig. 4, Fig. 6: the UE 112 selects the machine-learning architecture 212 associated with the end-to-end machine-learning architecture 202); and format, for transmission to the UE device, an indication of the second available machine learning agent from which the UE device may request the second machine learning model configuration ([0108], Fig. 1, Fig. 4, Fig. 6, Fig. 8: the network-slice manager 190 can transmit a second available machine-learning architecture message 816 to inform the UE 110 of the latest available machine-learning architectures 606 associated with the network slice 400).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Pezeshki and Klein, because Wang teaches that by sharing available machine-learning architectures with the UE, the network-slice manager allows the UE to efficiently determine an appropriate machine-learning architecture that satisfies the requested quality-of-service level of the application (Wang [0095]).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki and Klein, and further in view of Todd et al. (US 20160371396 A1), hereinafter Todd.

As to claim 6, the combination of Pezeshki and Klein does not teach wherein the RAN device is associated with a network architecture associated with multiple network security domains each indicative of a respective data privacy trust level.

Todd teaches wherein the RAN device is associated with a network architecture associated with multiple network security domains each indicative of a respective data privacy trust level ([0035], Fig. 3: each trust dimension further contains domains; the availability and recoverability trust dimension 311 has domains including, but not limited to, data availability, business continuity, quality and reliability, and operational resilience; the security, privacy and compliance trust dimension 312 has domains). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Todd with the teaching of Pezeshki and Klein, because Todd teaches data privacy trust providing dynamic, trusted placement, thus enabling data scientists and data engineers to maintain compliance with trust and veracity requirements/preferences associated with the data sets from which the results are derived (Todd [0007]).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki, Klein, and Todd, in view of Dao et al. (US 20210274392 A1), hereinafter Dao, and further in view of Pan (US 20210150521 A1), hereinafter Pan.

As to claim 7, the combination of Pezeshki, Klein, and Todd does not teach wherein the network architecture comprises a first network exposure function (NEF) associated with a first data privacy trust level, and a second NEF associated with a second data privacy trust level.

Dao teaches wherein the network architecture comprises a first network exposure function (NEF) and a second NEF ([0013], Fig. 2: a first network exposure function (NEF) and a second NEF, e.g., source S-NEF and target T-NEF). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Dao with the teaching of Pezeshki, Klein, and Todd, because Dao teaches that reselecting the S-NEF would provide a means for reducing communication network latency during reselection of a NEF, for example transferring from an S-NEF to a T-NEF (Dao [0170]).

The combination of Pezeshki, Klein, Todd, and Dao does not teach "associated with a first data privacy trust level" and "associated with a second data privacy trust level." Pan teaches these limitations ([0043], [0049], Fig. 1, Fig. 2: trusted users store the privacy-unprotected first data information included in the first message in local databases of the node devices; the first digital signature is made by the blockchain user at least for the privacy-unprotected first data information, or it is verified that the second digital signature is made by the trusted user at least for the privacy-protected second data information). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Pan with the teaching of Pezeshki, Klein, Todd, and Dao, because Pan teaches uploading, by a trusted user, privacy-protected second data information to the distributed database of the blockchain, so that consensus verification is performed on the privacy-protected second data information on the blockchain (Pan [0011]).

Claims 10, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki, Klein, and Todd, and further in view of Atwal et al. (US 20220247678 A1), hereinafter Atwal.

As to claim 10.
The combination of Pezeshki, Klein, and Todd does not teach wherein the network architecture comprises a Zero-Trust architecture. Atwal teaches wherein the network architecture comprises a Zero-Trust architecture ([0052]: the platform may also include Zero Trust Mobile Network (ZTMN) features built on an architecture that accommodates key security enhancements). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Atwal with the teaching of Pezeshki, Klein, and Todd, because Atwal teaches that an enterprise virtual private cloud (VPC) architecture platform includes Mobile Network-as-a-Service (MNaaS) features that would provide full control of the entire mobile network lifecycle to dynamically enable multiple mobile networks on a pay-as-you-go basis, a subscription basis, or combinations thereof (Atwal [0052]).

As to claim 11, the combination of Pezeshki, Klein, and Todd does not teach wherein the network architecture comprises a policy enforcement device associated with access control for machine learning requests comprising the first request. Atwal teaches this limitation ([0070]: a zero-trust architecture may drive the design and operations of the ZTMN; beyond designing the ZTMN itself based on zero-trust policies/access control, the ZTMN's architecture may also enable an enterprise to extend its own zero-trust security policies to each tenant network, including an entire private version of the 5G packet core and all the mobile assets connected to it). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Atwal with the teaching of Pezeshki, Klein, and Todd, because Atwal teaches that an enterprise virtual private cloud (VPC) architecture platform includes Mobile Network-as-a-Service (MNaaS) features that would provide full control of the entire mobile network lifecycle to dynamically enable multiple mobile networks on a pay-as-you-go basis, a subscription basis, or combinations thereof (Atwal [0052]).

As to claim 12, the combination of Pezeshki, Klein, and Todd, specifically Klein, teaches wherein the policy enforcement device is further associated with selecting a machine learning task-based ([0033]: an intelligent routing decision can be made at runtime to service client requests; gateway and edge devices in a broadband access network may employ different machine learning models with different hyperparameters and broadcast an indication of the machine learning models and hyperparameters to other network devices, so that those other network devices can determine machine learning capabilities or attributes of other network devices). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Klein with the teaching of Pezeshki, because Klein teaches using multiple network devices including machine learning cores or modules, capability information of which is shared across the network to route client requests, thereby improving throughput and latency (Klein [0035]).

The combination of Pezeshki, Klein, and Todd does not teach a data privacy trust level. Atwal teaches a data privacy trust level ([0070], [0075]: zero-trust policies; the ZTMN's architecture may also enable an enterprise to extend its own zero-trust security policies to each tenant network; each tenant network may maintain its own private control and data planes, which may be shown to result in extraordinary control, privacy, and data sovereignty). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Atwal with the teaching of Pezeshki, Klein, and Todd, because Atwal teaches that an enterprise virtual private cloud (VPC) architecture platform includes Mobile Network-as-a-Service (MNaaS) features that would provide full control of the entire mobile network lifecycle to dynamically enable multiple mobile networks on a pay-as-you-go basis, a subscription basis, or combinations thereof (Atwal [0052]).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki, Klein, Todd, and Atwal, and further in view of Brinkman et al. (US 20200092296 A1), hereinafter Brinkman.

As to claim 13, the combination of Pezeshki, Klein, Todd, and Atwal does not teach wherein the policy enforcement device is further associated with assigning an OAuth access token associated with the first request. Brinkman teaches this limitation ([0046]: the policy enforcement subsystem 370 may store the application authentication indicator; in some embodiments, the application authentication indicator is indicative of policies to apply to the client device; for example, in some embodiments, the application authentication indicator corresponds to an access token including valet keys (e.g., OAuth valet keys) defining an authorization profile for the client device). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Brinkman with the teaching of Pezeshki, Klein, Todd, and Atwal, because Brinkman teaches that the policy enforcement subsystem is integrated within a RADIUS server, thereby facilitating sharing the application authentication indicator with other peer RADIUS servers (Brinkman [0046]).

Claims 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over Pezeshki and Wang.

As to claim 14, Pezeshki teaches a non-transitory computer-readable storage medium comprising instructions to cause processing circuitry of a user equipment (UE) device, upon execution of the instructions by the processing circuitry, to ([0043], Fig. 2: a UE including a processor and memory with stored instructions, where the processor executes the instructions): format a first request for a machine learning configuration to transmit to a radio access network (RAN) device ([0077]-[0078], Fig. 4: a UE may request a neural network for ML model parameters); and identify a response received from the RAN device based on the first request, the response comprising the machine learning configuration or an indication of an available machine learning agent from which to request the machine learning configuration ([0091], Fig. 7: at 734, the base station 702 may provide/share one or more ML model parameters received from one or more UEs to other UEs, such as UEs to which one or more ML model parameters may apply and/or UEs that have requested an ML model update).

Pezeshki does not teach: update a machine learning model of the UE device based on the machine learning configuration. Wang teaches updating a machine learning model of the UE device based on the machine learning configuration ([0109], Fig. 1, Fig. 2, Fig. 4, Fig. 8: the network-slice manager 190 transmits the available machine-learning architecture message 816 to inform the UE 110 of an update to the available machine-learning architectures 606; based on this update, the UE 110 selects the machine-learning architecture 212 associated with the end-to-end machine-learning architecture 202). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Pezeshki, because Wang teaches that by sharing available machine-learning architectures with the UE, the network-slice manager allows the UE to efficiently determine an appropriate machine-learning architecture that satisfies the requested quality-of-service level of the application (Wang [0095]).

As to claim 15, the combination of Pezeshki and Wang, specifically Wang, teaches wherein the first request is transmitted by the UE device at a first time from a first location, and wherein execution of the instructions further causes the processing circuitry to ([0010]: the UE sends a first request from a geographical location at a first time): format a second request for a second machine learning configuration to transmit to the RAN device ([0049], [0066], [0142], Fig. 1, Fig. 2, Fig. 4: a second UE transmitting, to the network-slice manager, a second machine-learning architecture request message to request permission to use the second machine-learning architecture; machine-learning architecture 212); identify a second response received from the RAN device based on the second request ([0146], [0148], [0149]: the selecting of the second machine-learning architecture is based on the first requested quality-of-service level; the user equipment moving to a geographical location associated with a different base station or a different tracking area), the second response comprising the second machine learning configuration or a second indication of a second available machine learning agent from which to request the second machine learning configuration ([0108], Fig. 1, Fig. 4, Fig. 6, Fig. 8: the network-slice manager 190 can transmit a second available machine-learning architecture message 816 to inform the UE 110 of the latest available machine-learning architectures 606 associated with the network slice 400; [0066], Fig. 1, Fig. 4: a second UE 112 with limited available power can operate with the machine-learning architecture 212 but not the machine-learning architecture 211; [0108]: the second UE 112 moves to a different location or tracking area); and update a machine learning model of the UE device further based on the second machine learning configuration ([0109], Fig. 1, Fig. 2, Fig. 4, Fig. 8: the network-slice manager 190 transmits the available machine-learning architecture message 816 to inform the UE 110 of an update to the available machine-learning architectures 606; based on this update, the UE 110 selects the machine-learning architecture 212/second ML associated with the end-to-end machine-learning architecture 202).
Therefore, it would have been obvious to one of ordinary skills in the art before the effective filling date of the claimed invention to combine teaching of Wang with the teaching of Pezeshki because Wang teaches that sharing available machine-learning architectures with the UE, network-slice manager would allow UE to efficiently determine an appropriate machine-learning architecture that satisfies the requested quality-of-service level of the application. (Wang [0095]) As to claim 16. The combination of Pezeshki, and Wang Specifically Wang teaches wherein the response comprises the indication, and wherein execution of the instructions further causes the processing circuitry to: ([0108] Fig. 1, Fig. 4,, Fig. 6, Fig. 8, network-slice manager 190 can transmit a second available machine-learning architecture message 816 to inform the UE 110 of the latest available machine-learning architectures 606 associated with the network slice 400) format a second request to transmit to the available machine learning agent; ( [0146][0148][0149] the selecting of the second machine-learning architecture is based on the first requested quality-of-service level: the selecting of the second machine-learning architecture is based on the first requested quality-of-service level; the user equipment moving to a geographical location associated with a different base station or a different tracking area) and identify a second response received from the available machine learning agent, the second response comprising the machine learning configuration, ( [0146][0148][0149] the selecting of the second machine-learning architecture is based on the first requested quality-of-service level: the selecting of the second machine-learning architecture is based on the first requested quality-of-service level; the user equipment moving to a geographical location associated with a different base station or a different tracking area) wherein to update the machine learning model is further based on the 
second response. ([0109] Fig. 1, Fig. 2, Fig. 4, Fig. 8, The network-slice manager 190 transmits the available machine-learning architecture message 816 to inform the UE 110 of an update to the available machine-learning architectures 606. Based on this update, the UE 110 selects the machine-learning architecture 212 (the second ML architecture), associated with the end-to-end machine-learning architecture 202.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Pezeshki, because Wang teaches that by sharing available machine-learning architectures with the UE, the network-slice manager allows the UE to efficiently determine an appropriate machine-learning architecture that satisfies the requested quality-of-service level of the application. (Wang [0095]) As to claim 17, the combination of Pezeshki and Wang teaches the claimed limitations. Specifically, Pezeshki teaches wherein execution of the instructions further causes the processing circuitry to ([0159] Fig. 11, the wireless communication device inherently includes a processor, and instructions stored in memory are configured to perform operations, including operations of the process). Pezeshki does not teach identify an update to the machine learning configuration received from the RAN device or the available machine learning agent; and update the machine learning model based on the update to the machine learning configuration. Wang teaches identify an update to the machine learning configuration received from the RAN device or the available machine learning agent; ([0091] Fig. 7, At 734, the base station 702 may provide/share one or more ML model parameters received from one or more UEs to other UEs, such as UEs in which one or more ML model parameters may apply and/or UEs that have requested an ML model update) and update the machine learning model based on the update to the machine learning configuration. ([0109] Fig. 1, Fig. 2, Fig.
4, Fig. 8, The network-slice manager 190 transmits the available machine-learning architecture message 816 to inform the UE 110 of an update to the available machine-learning architectures 606. Based on this update, the UE 110 selects the machine-learning architecture 212 (the second ML architecture), associated with the end-to-end machine-learning architecture 202.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Pezeshki, because Wang teaches that by sharing available machine-learning architectures with the UE, the network-slice manager allows the UE to efficiently determine an appropriate machine-learning architecture that satisfies the requested quality-of-service level of the application. (Wang [0095]) As to claim 18, the combination of Pezeshki and Wang teaches the claimed limitations. Specifically, Wang teaches wherein the first request comprises an indication of a task associated with at least one of movement of the UE device, a quality-of-service recommendation, energy efficiency, inferencing accuracy, or communication delay, and wherein to select the available machine learning agent is further based on the task. ([0038] Network slicing enables a wireless communication network to satisfy a diverse set of quality-of-service (QoS) levels.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wang with the teaching of Pezeshki, because Wang teaches that by sharing available machine-learning architectures with the UE, the network-slice manager allows the UE to efficiently determine an appropriate machine-learning architecture that satisfies the requested quality-of-service level of the application. (Wang [0095]) Allowable Subject Matter 6.
Claims 8 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Regarding claims 7, 8, and 9, prior art Heath et al. [US 20130268357 A1] teaches in para [0127] the role of firewalls in Internet security and/or web security: Firewalls impose restrictions on incoming and/or outgoing packets to and/or from private networks. All the traffic, whether incoming or outgoing, must pass through the firewall; only authorized traffic is allowed to pass through it. Firewalls create checkpoints between an internal private network and/or a public Internet, also known as choke points. Firewalls can create choke points based on IP source and/or TCP port number. They can also serve as the platform for IPsec. Using tunnel mode capability, a firewall can be used to implement VPNs. Firewalls can also limit network exposure by hiding the internal network system and/or information from a public Internet. And prior art Chunduri et al. [US 20210204162 A1] discloses in para [0057]: The NEF 256 is an interface for external applications and provides secure and controlled access to network functions, network analytics, and other private network information, for example by masking and translating secure network data into a form that is not secure/sensitive. The SMF 234 manages communication sessions for the UE 201, for example by managing addresses, performing traffic steering, establishing/terminating sessions, and providing corresponding notifications. The SMF 234 can communicate with the BP UPF 242 and/or the AP UPF 241 via a fourth interface (N4). The AF 255 interacts with the core network to provide external applications secure access to influence traffic routing, to the NEF 256, and to the network policy framework to support policy control at the PCF 238.
As noted above, the 5G virtualized control plane 200 provides these services as well as others for the UE 201, and hence provides the overall management functions that inform the UE 201, as well as other network nodes, how communication with the network should occur. And prior art Castle et al. [US 20210152455 A1] discloses in para [0008]: The virtual service engines and associated software applications and platforms can include, but are not limited to: (i) a Platinum 1 Functionality Module that includes software for implementing a secure data connection, such as a virtual private network connection; (ii) a Router Functionality Module; (iii) a Firewall Functionality Module; (iv) a network traffic optimizer, or "WAN Optimizer," Functionality Module; (v) a Platinum 0 Functionality Module that captures network flow data and that implements a Secure Remote Monitoring and Management ("SRM2") Application Monitor software application that captures function status network monitoring data that is used to generate Business Critical Service metrics, as explained more fully below; (vi) a User Experience Engine/Monitoring Engine and a Polling Engine Functionality Module that in part implements a network monitoring software platform ("NMS Platform"); and (vii) a Polling Engine Functionality Module that in part implements the network monitoring software platform. However, the combination of prior art records Heath, Chunduri, and Castle does not teach, for claim 8, wherein the first data privacy trust level is greater than the second data privacy trust level, wherein the machine learning agent is associated with the first NEF, wherein all first training data for a second machine learning agent associated with the second NEF is available to the machine learning agent, and wherein a subset of second training data for the machine learning agent is unavailable to the second machine learning agent.
For claim 9, wherein the first data privacy trust level is greater than the second data privacy trust level, wherein the location is associated with the first NEF, wherein all first inferencing outputs, based on the machine learning configuration, from a second UE device at a second location associated with the second NEF are available to the machine learning agent, and wherein a subset of inferencing outputs from the UE device, based on the machine learning configuration, is unavailable to a second machine learning agent associated with the second NEF. Therefore, claims 8 and 9 would be allowable if rewritten or amended to overcome the objections set forth in this office action and rewritten in independent form including all of the limitations of the base claim and any intervening claims. Dependent claims of claim 8 would also be allowable for the same reasons above. Conclusion 7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: ZHANG, Hang [US 20210286896 A1], METHODS AND SYSTEMS FOR DATA MANAGEMENT IN COMMUNICATION NETWORK; XIN, Yang et al. [US 20230083982 A1], COMMUNICATION METHOD, APPARATUS, AND SYSTEM; ZHU, Haoren et al. [US 20230379700 A1], SECURITY PARAMETER OBTAINING METHOD, APPARATUS, AND SYSTEM. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATIQUE AHMED, whose telephone number is (571) 272-6244. The examiner can normally be reached 9:30 AM - 7:30 PM, M-F, Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Un Cho, can be reached at (571) 272-7919.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ATIQUE AHMED/Primary Examiner, Art Unit 2413

Prosecution Timeline

Dec 29, 2023
Application Filed
Feb 01, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598537
METHODS FOR CELL ACCESS AND DEVICES
2y 5m to grant Granted Apr 07, 2026
Patent 12593365
USER EQUIPMENT AND METHOD IN A WIRELESS COMMUNICATIONS NETWORK
2y 5m to grant Granted Mar 31, 2026
Patent 12587917
MANAGEMENT METHOD, DEVICE AND STORAGE MEDIUM FOR CELL HANDOVER
2y 5m to grant Granted Mar 24, 2026
Patent 12587943
SIGNAL TRANSMISSION METHOD AND APPARATUS, ACCESS NODE, PROCESSING UNIT, SYSTEM AND MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12587345
WIRELESS COMMUNICATION METHOD USING TRIGGER INFORMATION, AND WIRELESS COMMUNICATION TERMINAL
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
96%
With Interview (+15.9%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 460 resolved cases by this examiner. Grant probability derived from career allow rate.
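The projections above appear to combine the 80% career allow rate with the +15.9-point interview lift additively in percentage points (80 + 15.9 = 95.9, displayed as 96%). A minimal sketch of that assumed arithmetic follows; the function name and the additive, capped-at-100 model are illustrative assumptions, not the tool's documented method:

```python
def projected_grant_probability(base_rate_pct: float, interview_lift_pct: float) -> float:
    """Assumed additive model: career allow rate plus interview lift,
    both expressed in percentage points, capped at 100."""
    return min(base_rate_pct + interview_lift_pct, 100.0)

# Career allow rate 80%, interview lift +15.9 points -> 95.9, shown as 96%
with_interview = projected_grant_probability(80.0, 15.9)
print(round(with_interview))  # 96
```

Note that a cap is needed under this assumption: an examiner with a 95% allow rate and the same lift would otherwise project above 100%.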
