Prosecution Insights
Last updated: April 18, 2026
Application No. 18/324,038

Enhancements for Distributed Machine Learning Models in Wireless Communication Systems

Non-Final OA §103
Filed
May 25, 2023
Examiner
NGUYEN, MINH TRANG T
Art Unit
2477
Tech Center
2400 — Computer Networks
Assignee
Apple Inc.
OA Round
1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 90% — above average (795 granted / 882 resolved; +32.1% vs TC avg)
Interview Lift: +5.3% in resolved cases with interview (moderate, ~+5% lift)
Typical Timeline: 2y 8m average prosecution; 19 applications currently pending
Career History: 901 total applications across all art units

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 40.5% (+0.5% vs TC avg)
§102: 37.3% (-2.7% vs TC avg)
§112: 5.1% (-34.9% vs TC avg)
Comparison baseline is the Tech Center average estimate. Based on career data from 882 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ly et al. (US 2022/0101204) (hereinafter Ly) in view of Bliss et al. (US 2023/0010215) (hereinafter Bliss).

Regarding claim 1, Ly discloses an apparatus (e.g., Fig. 10, e.g., an apparatus 1002, p. [0241]), comprising: at least one processor (e.g., 1008) configured to cause a user equipment (UE) (see Ly, p. [0038], e.g., UE 120) to: receive, from a network node (e.g., base station), one or more reference signals (see Ly, p. [0046-0047], e.g., a base station transmits reference signals to a user equipment); perform, using the one or more reference signals, one or more measurements (see Ly, p. [0057], e.g., a client device (e.g., UE) performs measurements associated with the reference signals); compress the one or more measurements into one or more measurement results (see Ly, p. [0058], e.g., a client device (e.g., UE) may use one or more machine learning components (e.g., neural networks) that may be trained to learn dependence of measured qualities on individual parameters, isolate the measured qualities through various layers of the one or more machine learning components, and compress measurements in a way that limits compression loss); transmit, to a server (e.g., a TRP, another UE, and/or a base station), the one or more measurement results (see Ly, p. [0058], [0061], and [0075], e.g., the client device may transmit the compressed measurements to the server device); receive, from the server, the at least one of the one or more IDs or one or more models, wherein the one or more IDs or one or more models are provided based on the one or more measurement results (see Ly, p. [0061], [0065-0066], e.g., the server device sends the neural network model to the client devices. Each client device trains the received neural network model using its own data and sends back an updated neural network model to the server device. The server device averages the updated neural network models from the client devices to obtain a new neural network model).

However, Ly does not expressly disclose: request, from the server, at least one of one or more identifiers (IDs) or one or more models associated with the one or more IDs; transmit, to the network node, an ordered list of the one or more IDs; receive, from the network node, a response indicating selection of an ID of the one or more IDs; and communicate, using the ID, with the network node.

Bliss discloses the above recited limitations. In particular, Bliss discloses request, from the server, at least one of one or more identifiers (IDs) or one or more models associated with the one or more IDs (see Bliss, p. [0054-0055], [0045-0046], e.g., commissioning service 105 may receive the request transmitted from device 110 (e.g., directly from device 110 and/or via server 104), and commissioning service 105 may identify and/or obtain the network ID assigned to device 110 (e.g., by using the content of the request) (STEP 212A)); transmit, to the network node (e.g., commissioning service 105), an ordered list of the one or more IDs (see Bliss, p. [0039], e.g., commissioning service 105 imports the network identifiers and generates an ordered table comprising the network IDs); receive, from the network node, a response indicating selection of an ID of the one or more IDs (see Bliss, p. [0045-0046], e.g., commissioning service 105 may generate a response that includes the network ID for device 110, and send the response (e.g., via server 104) to device 110); and communicate, using the ID, with the network node (see Bliss, p. [0045-0046], e.g., device 110 may receive the response from commissioning service 105 (STEP 208B). Subsequent to receiving the response, device 110 may store the network ID locally (STEP 210B)).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bliss's teachings into Ly. The suggestion/motivation would have been to provide the content of the request to correlate the network IDs of the ordered list to the device, as suggested by Bliss.

Regarding claim 2, the combined teaching of Ly and Bliss discloses the apparatus of claim 1, wherein the one or more reference signals are channel state information-reference signals (CSI-RS) (see Ly, p. [0074], e.g., a reference signal such as a channel state information reference signal (CSI-RS)).
Regarding claim 3, the combined teaching of Ly and Bliss discloses the apparatus of claim 1, wherein the UE is configured to transmit the one or more measurement results to the server while operating according to at least one of the following conditions: while connected to external electrical power (see Bliss, Fig. 2B, p. [0042], e.g., device 110 may be powered-on, booted, and/or re-booted subsequent and/or in response to the transmission of the prompt (e.g., by a user) (STEP 202B)); while connected to Wi-Fi; while operating in high fidelity signal conditions; or during pauses in application activity.

Regarding claim 4, the combined teaching of Ly and Bliss discloses the apparatus of claim 1, wherein the one or more measurement results include metadata corresponding to the one or more IDs, and wherein the metadata indicates at least one of: a training status of the one or more IDs (see Bliss, p. [0076-0077], e.g., the client device 302 may collect training data and store it in a memory device 334. The stored training data may be referred to as a “local dataset.”); a functionality, object, input, or output of the one or more IDs; latency benchmarks, memory requirements, or accuracy of the one or more IDs; a compression status of the one or more IDs; inferencing or operating conditions of the one or more IDs; or pre-processing and post-processing information of the one or more measurements.

Regarding claim 5, the combined teaching of Ly and Bliss discloses the apparatus of claim 1, wherein the one or more measurements comprise at least one of: one or more channel state information (CSI) measurements; or one or more beam sweeping measurements (see Ly, p. [0057], [0074], e.g., the client device may measure reference signals during a beam management process for channel state feedback).
Regarding claim 6, the combined teaching of Ly and Bliss discloses the apparatus of claim 1, wherein the server is a machine learning model trainer collocated with a machine learning model server (see Ly, p. [0061], [0066], e.g., a client device may receive a machine learning component from a server device).

Regarding claim 7, the combined teaching of Ly and Bliss discloses the apparatus of claim 1, wherein the ordered list of one or more IDs is arranged in a preferential order based on one or more affinity metrics associated with the one or more IDs (see Bliss, Fig. 1, table 116, p. [0037], e.g., table 116 may appear as a list, table, graphic, text, chart, figure, and/or have any other presentation format suitable for presenting the contents of file 116, and p. [0143], e.g., an ordered list (e.g., table 116) may include a plurality of network IDs assigned to the devices of system 102).

Regarding claim 8, the combined teaching of Ly and Bliss discloses the apparatus of claim 1, further comprising: a radio operably coupled to the at least one processor (see Ly, Fig. 3, p. [0072]).

Regarding claim 9, Ly discloses an apparatus (e.g., Fig. 10, e.g., an apparatus 1002, p. [0241]), comprising: at least one processor (e.g., 1008) configured to cause a network node to: receive, from a user equipment (UE) (e.g., UE 120), a request for one or more training resources (see Ly, p. [0051], e.g., at base station 110, the uplink signals from UE 120 and other UEs may be received by antennas 234, and p. [0103], e.g., the client device may transmit the update associated with the machine learning component to the server device based at least in part on whether the one or more reporting conditions are satisfied); and transmit, to the UE, the one or more training resources (see Ly, p. [0061], [0065-0066], e.g., the server device sends the neural network model to the client devices); receive, from the UE, an ordered list of one or more model identifiers (IDs) (see Ly, p. [0061], [0065-0066], e.g., each client device trains the received neural network model using its own data and sends back an updated neural network model to the server device. The server device averages the updated neural network models from the client devices to obtain a new neural network model).

However, Ly does not expressly disclose: request, from a server, one or more training samples associated with the one or more model IDs; receive, from the server, the one or more training samples associated with the one or more model IDs; select, based at least in part on the one or more training samples, a model ID of the one or more model IDs; transmit, to the UE, a response indicating the model ID; and communicate, using the model ID, with the UE.

Bliss discloses the above recited limitations. In particular, Bliss discloses request, from a server (e.g., server 104), one or more training samples associated with the one or more model IDs (see Bliss, p. [0054-0055], [0045-0046], e.g., commissioning service 105 may receive the request transmitted from device 110 (e.g., directly from device 110 and/or via server 104), and commissioning service 105 may identify and/or obtain the network ID assigned to device 110 (e.g., by using the content of the request) (STEP 212A)); receive, from the server, the one or more training samples associated with the one or more model IDs (see Bliss, p. [0039], e.g., commissioning service 105 imports the network identifiers and generates an ordered table comprising the network IDs); select, based at least in part on the one or more training samples, a model ID of the one or more model IDs (see Bliss, p. [0039], [0055], e.g., selection of the export feature may enable exportation of the data comprised by the list of network IDs); transmit, to the UE, a response indicating the model ID (see Bliss, p. [0045-0046], e.g., commissioning service 105 may generate a response that includes the network ID for device 110, and send the response (e.g., via server 104) to device 110); and communicate, using the model ID, with the UE (see Bliss, p. [0045-0046], e.g., device 110 may receive the response from commissioning service 105 (STEP 208B). Subsequent to receiving the response, device 110 may store the network ID locally (STEP 210B)).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bliss's teachings into Ly. The suggestion/motivation would have been to provide the content of the request to correlate the network IDs of the ordered list to the device, as suggested by Bliss.

Regarding claim 10, the combined teaching of Ly and Bliss discloses the apparatus of claim 9, wherein the one or more training samples include metadata information corresponding to at least one of the following: date of capture; time of capture (see Ly, p. [0081], e.g., the client device 302 may determine an amount of training data collected by the client device 302 during a collection period (e.g., some specified period of time)); location of capture; network identification; cell identification; beam configuration and identification; device model and software version; or an assessment of the UE's operating environment based on the UE's local measurements and sensors.

Regarding claim 11, the combined teaching of Ly and Bliss discloses the apparatus of claim 9, wherein the one or more model IDs include information corresponding to at least one of: a network vendor identification (see Ly, p. [0066], e.g., a neural network model, parameters corresponding to a neural network model, a set of machine learning models, and/or the like); a UE vendor identification; a public land mobility network (PLMN) ID; a use case ID; or a number of neural networks for one or more use cases.
Regarding claim 12, the combined teaching of Ly and Bliss discloses the apparatus of claim 9, wherein the UE and the network node operate respectively as an encoder-decoder pair (see Ly, Fig. 3, p. [0072]).

Regarding claim 13, the combined teaching of Ly and Bliss discloses the apparatus of claim 9, wherein the at least one processor is further configured to cause the network node to: associate at least one of a label or hash value with the one or more training resources (see Ly, p. [0085], e.g., the loss function difference may include a difference between a first loss function value associated with the machine learning component and a second loss function value associated with the machine learning component, and p. [0092]).

Regarding claim 14, the combined teaching of Ly and Bliss discloses the apparatus of claim 13, wherein the at least one of a label or a hash value indicates measurement conditions of the one or more training samples (see Ly, p. [0085], e.g., the first loss function value may correspond to an initial instance of the machine learning component, and the second loss function value may correspond to an updated instance of the machine learning component, and p. [0092]).

Regarding claim 15, the combined teaching of Ly and Bliss discloses the apparatus of claim 9, wherein the at least one processor is further configured to cause the network node to: request, from the UE, the one or more training samples associated with the one or more model IDs (see Bliss, p. [0043], e.g., commissioning service 105 may receive the request transmitted from device 110 (e.g., directly from device 110 and/or via server 104) (STEP 208A). Commissioning service 105 may identify and/or obtain the network ID assigned to device 110).

Regarding claim 16, Ly discloses a network node (see Ly, p. [0038], e.g., UE 120), comprising: at least one processor configured to cause the network node to: receive, from a user equipment (UE), a request for one or more reference signals; transmit, to the UE, the one or more reference signals (see Ly, p. [0046-0047], e.g., a base station transmits reference signals to a user equipment); receive, from the UE, one or more compressed measurement results; transmit, to a first server, the one or more compressed measurement results (see Ly, p. [0058], e.g., a client device (e.g., UE) may use one or more machine learning components (e.g., neural networks) that may be trained to learn dependence of measured qualities on individual parameters, isolate the measured qualities through various layers of the one or more machine learning components, and compress measurements in a way that limits compression loss); receive, from the UE, an ordered list of one or more model IDs (see Ly, p. [0061], [0065-0066], e.g., the server device sends the neural network model to the client devices. Each client device trains the received neural network model using its own data and sends back an updated neural network model to the server device. The server device averages the updated neural network models from the client devices to obtain a new neural network model).

However, Ly does not expressly disclose: request, from a second server, one or more training samples corresponding to the one or more model IDs; receive, from the second server, the one or more training samples; select, based at least in part on the one or more training samples, a model ID from the one or more model IDs; transmit, to the UE, a response indicating the model ID; and communicate, using the model ID, with the UE.

Bliss discloses the above recited limitations. In particular, Bliss discloses request, from a second server (e.g., server 104), one or more training samples corresponding to the one or more model IDs (see Bliss, p. [0054-0055], [0045-0046], e.g., commissioning service 105 may receive the request transmitted from device 110 (e.g., directly from device 110 and/or via server 104), and commissioning service 105 may identify and/or obtain the network ID assigned to device 110 (e.g., by using the content of the request) (STEP 212A)); receive, from the second server, the one or more training samples (see Bliss, p. [0039], e.g., commissioning service 105 imports the network identifiers and generates an ordered table comprising the network IDs); select, based at least in part on the one or more training samples, a model ID from the one or more model IDs (see Bliss, p. [0039], [0055], e.g., selection of the export feature may enable exportation of the data comprised by the list of network IDs); transmit, to the UE, a response indicating the model ID (see Bliss, p. [0045-0046], e.g., commissioning service 105 may generate a response that includes the network ID for device 110, and send the response (e.g., via server 104) to device 110); and communicate, using the model ID, with the UE (see Bliss, p. [0045-0046], e.g., device 110 may receive the response from commissioning service 105 (STEP 208B). Subsequent to receiving the response, device 110 may store the network ID locally (STEP 210B)).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bliss's teachings into Ly. The suggestion/motivation would have been to provide the content of the request to correlate the network IDs of the ordered list to the device, as suggested by Bliss.

Regarding claim 17, the combined teaching of Ly and Bliss discloses the network node of claim 16, wherein the one or more reference signals are channel state information-reference signals (CSI-RS) (see Ly, p. [0074], e.g., a reference signal such as a channel state information reference signal (CSI-RS)).
Regarding claim 18, the combined teaching of Ly and Bliss discloses the network node of claim 16, wherein the one or more compressed measurement results include metadata indicating at least one of: a training status of the one or more model IDs (see Bliss, p. [0076-0077], e.g., the client device 302 may collect training data and store it in a memory device 334. The stored training data may be referred to as a “local dataset.”); a functionality, object, input, or output of the one or more model IDs; latency benchmarks, memory requirements, or accuracy of the one or more model IDs; a compression status of the one or more model IDs; inferencing or operating conditions of the one or more model IDs; or pre-processing and post-processing information of the one or more compressed measurement results.

Regarding claim 19, the combined teaching of Ly and Bliss discloses the network node of claim 16, wherein the one or more compressed measurement results are based on at least one of: one or more channel state information (CSI) measurements; or one or more beam sweeping measurements (see Ly, p. [0057], [0074], e.g., the client device may measure reference signals during a beam management process for channel state feedback).

Regarding claim 20, the combined teaching of Ly and Bliss discloses the network node of claim 16, wherein the ordered list of the one or more model IDs is arranged in a preferential order based on one or more affinity metrics (see Bliss, Fig. 5, p. [0058-0062]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MINH TRANG T NGUYEN, whose telephone number is (571) 270-5248. The examiner can normally be reached M-F 8:30am-6:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chirag C Shah, can be reached at 571-272-3144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MINH TRANG T NGUYEN/ Primary Examiner, Art Unit 2477

Prosecution Timeline

May 25, 2023 — Application Filed
Apr 03, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587246
METHOD AND DEVICE FOR TRANSMITTING OR RECEIVING CHANNEL STATE INFORMATION IN WIRELESS COMMUNICATION SYSTEM
2y 5m to grant · Granted Mar 24, 2026
Patent 12580638
SATELLITE COMMUNICATION SYSTEM
2y 5m to grant · Granted Mar 17, 2026
Patent 12580633
METHOD AND APPARATUS FOR CONTROLLING POWER OUTPUT FROM ELECTRONIC DEVICE TO EXTERNAL ELECTRONIC DEVICE IN A WIRELESS COMMUNICATION SYSTEM
2y 5m to grant · Granted Mar 17, 2026
Patent 12580621
BREATHING RATE ESTIMATION USING RADIO FREQUENCY (RF) SENSING
2y 5m to grant · Granted Mar 17, 2026
Patent 12574094
BEAM SELECTION USING A BEAM FINGERPRINTING DATABASE
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 95% (+5.3%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 882 resolved cases by this examiner. Grant probability derived from career allow rate.
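The projection figures above can be reproduced from the examiner's career numbers by simple arithmetic: the grant probability is the career allow rate (grants over resolved cases), and the interview-adjusted figure adds the observed lift. A minimal sketch, assuming the product computes these as straightforward ratios (the variable names here are illustrative, not the product's actual schema):

```python
# Reproduce the headline projection figures from the examiner's career data.
granted = 795          # career grants (from "795 granted / 882 resolved")
resolved = 882         # resolved cases (grants + abandonments)
interview_lift = 5.3   # observed lift, in percentage points, with an interview

allow_rate = granted / resolved * 100          # career allow rate, ~90.1%
with_interview = allow_rate + interview_lift   # naive additive adjustment, ~95.4%

print(f"Grant probability: {allow_rate:.0f}%")     # prints "Grant probability: 90%"
print(f"With interview:    {with_interview:.0f}%") # prints "With interview:    95%"
```

Note this is an additive adjustment on a point estimate; the page does not state whether the lift is computed per-case or marginally, so treat it as an approximation.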
