Prosecution Insights
Last updated: April 19, 2026
Application No. 18/864,477

HYBRID TRANSMISSION FOR FEDERATED LEARNING

Status: Non-Final OA (§102)
Filed: Nov 08, 2024
Examiner: GRIJALVA LOBOS, BORIS D
Art Unit: 2446
Tech Center: 2400 — Computer Networks
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (316 granted / 383 resolved), +24.5% vs TC avg, above average
Interview Lift: strong, +22.2% on resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 21 applications currently pending
Career History: 404 total applications across all art units
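The headline figures in this panel follow from the raw counts shown above. A minimal sketch, assuming the dashboard truncates rather than rounds (316/383 is about 82.5% but displays as 82%) and that the Tech Center average implied by the +24.5% delta is roughly 58%; both assumptions are mine, not stated by the source:

```python
# Sketch: reproducing the headline examiner metrics from the raw counts.
# Truncation (int) vs rounding is an assumption; 316/383*100 = 82.5,
# which rounds to 83 but truncates to the displayed 82.

def allow_rate_pct(granted: int, resolved: int) -> int:
    """Career allow rate as a whole-number percentage (truncated)."""
    return int(granted / resolved * 100)

def delta_vs_tc(rate_pct: float, tc_avg_pct: float) -> float:
    """Signed difference against the Tech Center average, in points."""
    return round(rate_pct - tc_avg_pct, 1)

rate = allow_rate_pct(316, 383)   # counts from the dashboard -> 82
delta = delta_vs_tc(82.5, 58.0)   # 58.0% TC average is implied by 82.5 - 24.5
```

The same arithmetic applies to the statute-specific deltas below, each measured against its own Tech Center baseline.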

Statute-Specific Performance

§101: 12.5% (-27.5% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 18.5% (-21.5% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 383 resolved cases

Office Action

§102
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office action is in response to communications filed on 11/8/2024. Claims 1-30 are pending.

DETAILED ACTION

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Moosavi et al. (US 20250175393 A1, hereinafter Moosavi).
Regarding claim 1, Moosavi discloses a method of wireless communication at a user equipment (UE) (¶[0044], "Each of the user equipment 170a:170c comprises, is collocated with, or integrated with, a respective agent entity"), comprising: receiving one or more configurations indicating transmission of gradient information using mapping information (¶[0049], "The server entity 200 broadcasts the current parameter vector of the learning model, θ(i), to the agent entities 300a, 300b"; ¶[0050], "Each agent entity 300a, 300b performs a local optimization of the model by running T steps of a stochastic gradient descent update on θ(i)"; ¶[0065], "The server entity 200 configures the agent entities 300b, 300c to, as part of performing the iterative learning process:" [0066] "use over-the-air transmission with direct analog modulation for communicating local updates of the iterative learning process to the cluster head 120a:120c"; ¶[0067], "The server entity 200 configures the cluster head 120a:120c of each cluster 110a:110c to, as part of performing the iterative learning process:" ¶[0068] "aggregate the local updates received from the agent entities 300b, 300c within its cluster 110a:110c, and" ¶[0069] "use unicast digital transmission for communicating aggregated local updates to the server entity 200") and a transmission type, wherein the transmission type indicates an analog transmission or a digital transmission of the gradient information (¶[0013], "The agent entity acts as a cluster head of a cluster of agent entities. The method comprises receiving configuration from the server entity. 
According to the configuration, the agent entity is to, as part of performing the iterative learning process, aggregate local updates received in over-the-air transmission with direct analog modulation from the agent entities within the cluster and to use unicast digital transmission for communicating the aggregated local updates to the server entity"; ¶[0066], "use over-the-air transmission with direct analog modulation for communicating local updates"; ¶[0067], "use unicast digital transmission for communicating aggregated local updates to the server entity 200") and the gradient information is associated with training a global machine learning model (¶[0002], "multiple (possible very large number of) agents, for example implemented in user equipment, participate in training a shared global learning model by exchanging model updates with a centralized parameter server"); generating a signal including quantized gradient information based on the mapping information and a set of gradients (¶[0051], "Each agent entity 300a, 300b transmits to the server entity 200 their model update […] where θk (i, 0) is the model that agent entity k received from the server entity" - generating inherent), wherein the set of gradients is associated with the training of the global machine learning model with local training data at the UE (¶[0002], "multiple (possible very large number of) agents, for example implemented in user equipment, participate in training a shared global learning model by exchanging model updates with a centralized parameter server"); and transmitting the signal to a network entity via the analog transmission or the digital transmission based on the transmission type (¶[0051], "Each agent entity 300a, 300b transmits to the server entity 200 their model update […] where θk (i, 0) is the model that agent entity k received from the server entity"). 
Regarding claim 10, Moosavi discloses an apparatus for wireless communication at a user equipment (UE), comprising: a memory; and a processor coupled with the memory (¶[0147]-[0148]). The remaining limitations of claim 10 are similar in scope to those of claim 1. Therefore, claim 10 is rejected for the same reasons as set forth in the rejection of claim 1, above.

Claims 21-22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Pezeshki et al. (US 20220124779 A1, hereinafter Pezeshki).

Regarding claim 21, Pezeshki discloses a method of wireless communication at a network entity (¶[0072], "base station"), comprising: generating a first set of configurations for a first set of user equipments (UEs) indicating digital transmission of first gradient information using first mapping information (¶[0072], "base station 410 may transmit, and the UE 405 may receive, a federated learning configuration. The federated learning configuration may include an indication of a periodic communication scheme for communicating with the base station 410 to facilitate federated learning associated with a machine learning component" - generating inherent; ¶[0073], "the periodic communication scheme may include an SPS configuration for downloading global updates associated with the machine learning component from the base station 410 and/or a configured grant configuration for uploading, to the base station 410, local updates associated with the machine learning component. For example, an SPS configuration may allocate periodic resources intended for transmissions of transport blocks carrying global updates. The periodic resources may include time domain resources, frequency domain resources, and/or spatial domain resources, among other resources.
Dynamic scheduling may be used to allocate resources for any re-transmissions"; ¶[0074], "the configured grant configuration may configure digital transmissions of gradient vectors from UEs to the base station"); generating a second set of configurations for a second set of UEs indicating analog transmission of second gradient information using second mapping information (¶[0072], "base station 410 may transmit, and the UE 405 may receive, a federated learning configuration. The federated learning configuration may include an indication of a periodic communication scheme for communicating with the base station 410 to facilitate federated learning associated with a machine learning component" - generating inherent; ¶[0073], "the periodic communication scheme may include an SPS configuration for downloading global updates associated with the machine learning component from the base station 410 and/or a configured grant configuration for uploading, to the base station 410, local updates associated with the machine learning component. For example, an SPS configuration may allocate periodic resources intended for transmissions of transport blocks carrying global updates. The periodic resources may include time domain resources, frequency domain resources, and/or spatial domain resources, among other resources. Dynamic scheduling may be used to allocate resources for any re-transmissions"; ¶[0074], "the configured grant configuration may configure analog over-the-air aggregation of gradient vectors. In this case, the base station may configure each UE with the same resources"; ¶[0075], "As an example, a first set of resources (e.g., a first set of symbols and/or slots) may be allocated for transmitting a first global update, a second set of resources may be allocated for transmitting a second global update, a third set of resources may be allocated for transmitting a third global update, and so on. 
Each set of resources may occur in accordance with a periodicity of the SPS configuration"); and transmitting the first set of configurations to the first set of UEs and the second set of configurations to the second set of UEs (¶[0072], "base station 410 may transmit, and the UE 405 may receive, a federated learning configuration"). Regarding claim 22, Pezeshki discloses an apparatus for wireless communication at a network entity, comprising: a memory; and a processor coupled with the memory (¶[0011], "a non-transitory computer-readable medium storing a set of instructions for wireless communication includes one or more instructions that, when executed by one or more processors of a base station") and configured to: generate a first set of configurations for a first set of user equipments (UEs) indicating digital transmission of first gradient information using first mapping information (¶[0072], "base station 410 may transmit, and the UE 405 may receive, a federated learning configuration. The federated learning configuration may include an indication of a periodic communication scheme for communicating with the base station 410 to facilitate federated learning associated with a machine learning component" - generating inherent; ¶[0073], "the periodic communication scheme may include an SPS configuration for downloading global updates associated with the machine learning component from the base station 410 and/or a configured grant configuration for uploading, to the base station 410, local updates associated with the machine learning component. For example, an SPS configuration may allocate periodic resources intended for transmissions of transport blocks carrying global updates. The periodic resources may include time domain resources, frequency domain resources, and/or spatial domain resources, among other resources. 
Dynamic scheduling may be used to allocate resources for any re-transmissions"; ¶[0074], "the configured grant configuration may configure digital transmissions of gradient vectors from UEs to the base station"); generate a second set of configurations for a second set of UEs indicating analog transmission of second gradient information using second mapping information (¶[0072], "base station 410 may transmit, and the UE 405 may receive, a federated learning configuration. The federated learning configuration may include an indication of a periodic communication scheme for communicating with the base station 410 to facilitate federated learning associated with a machine learning component" - generating inherent; ¶[0073], "the periodic communication scheme may include an SPS configuration for downloading global updates associated with the machine learning component from the base station 410 and/or a configured grant configuration for uploading, to the base station 410, local updates associated with the machine learning component. For example, an SPS configuration may allocate periodic resources intended for transmissions of transport blocks carrying global updates. The periodic resources may include time domain resources, frequency domain resources, and/or spatial domain resources, among other resources. Dynamic scheduling may be used to allocate resources for any re-transmissions"; ¶[0074], "the configured grant configuration may configure analog over-the-air aggregation of gradient vectors. In this case, the base station may configure each UE with the same resources"; ¶[0075], "As an example, a first set of resources (e.g., a first set of symbols and/or slots) may be allocated for transmitting a first global update, a second set of resources may be allocated for transmitting a second global update, a third set of resources may be allocated for transmitting a third global update, and so on. 
Each set of resources may occur in accordance with a periodicity of the SPS configuration"); and transmit the first set of configurations to the first set of UEs and the second set of configurations to the second set of UEs (¶[0072], "base station 410 may transmit, and the UE 405 may receive, a federated learning configuration").

Allowable Subject Matter

Claims 2-9, 11-20, and 23-30 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20230422308 A1, which discloses "a UE receives configuration information from a base station (S3910). Here, the configuration information may inform a modulation method and an encoding method applied to the UE. Here, the modulation method and the encoding method may be the same for a plurality of UEs including the UE" (¶[0280]) and "Subsequently, the UE performs encoding and modulation on the data based on the configuration information (S3920). Here, for example, the encoding may be a Q-ary linear encoding. Also, here, a codebook used for the encoding may be pre-configured/pre-defined or identical for a plurality of UEs including the UE" (¶[0281]); claim 20, "the encoding method informs a codebook used by the terminal."

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BORIS D GRIJALVA LOBOS whose telephone number is (571)272-0767. The examiner can normally be reached M-F 10:30AM to 6:30PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Gillis, can be reached at 571-272-7952. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BORIS D GRIJALVA LOBOS/
Primary Patent Examiner, Art Unit 2446
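The hybrid scheme at issue in the §102 rejection can be sketched informally: each UE is configured with a transmission type, analog UEs' gradient signals are summed by the channel itself (over-the-air aggregation), and digital UEs transmit quantized gradients individually for the server to combine. The function names, the uniform 4-bit quantizer, and the toy gradient vectors below are illustrative assumptions, not anything disclosed in the application or the cited references:

```python
# Hedged sketch of hybrid analog/digital gradient aggregation in federated
# learning. The uniform quantizer stands in for "quantized gradient
# information based on the mapping information"; all specifics are assumed.

def quantize(grad, bits=4, g_max=1.0):
    """Uniform b-bit quantize/dequantize over the range [-g_max, g_max]."""
    levels = 2 ** bits - 1
    out = []
    for g in grad:
        g = max(-g_max, min(g_max, g))               # clip to the mapping range
        q = round((g + g_max) / (2 * g_max) * levels)  # integer codeword
        out.append(q / levels * 2 * g_max - g_max)     # server-side dequantize
    return out

def aggregate(analog_grads, digital_grads, bits=4):
    """Server-side average over both transmission types."""
    n = len(analog_grads) + len(digital_grads)
    dim = len(analog_grads[0]) if analog_grads else len(digital_grads[0])
    # Analog path: the channel sums the unquantized signals over the air.
    ota_sum = [sum(g[i] for g in analog_grads) for i in range(dim)]
    # Digital path: each UE's quantized gradient arrives individually.
    dig_sum = [0.0] * dim
    for g in digital_grads:
        dq = quantize(g, bits)
        dig_sum = [a + b for a, b in zip(dig_sum, dq)]
    return [(ota_sum[i] + dig_sum[i]) / n for i in range(dim)]

# Two analog UEs plus one digital UE, 2-dimensional toy gradients.
avg = aggregate(analog_grads=[[0.2, -0.4], [0.4, 0.0]],
                digital_grads=[[0.6, -0.2]])
```

This is only a conceptual model of the claimed configuration/transmission flow; a real system would also handle channel noise, power control, and the per-UE mapping information the claims recite.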

Prosecution Timeline

Nov 08, 2024
Application Filed
Mar 03, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598171
SYSTEMS AND METHODS FOR FAST AND SIMULTANEOUS MULTI-FACTOR AUTHENTICATION USING VECTOR COMPUTING
2y 5m to grant • Granted Apr 07, 2026
Patent 12591459
MANAGING USE OF HARDWARE BUNDLES USING CONTROL MECHANISMS
2y 5m to grant • Granted Mar 31, 2026
Patent 12591659
METHOD FOR IDENTIFYING POTENTIAL DATA EXFILTRATION ATTACKS IN AT LEAST ONE SOFTWARE PACKAGE
2y 5m to grant • Granted Mar 31, 2026
Patent 12591651
DATA STORAGE DEVICE AND METHOD OF ACCESS WITH USER FINGERPRINT
2y 5m to grant • Granted Mar 31, 2026
Patent 12574261
TAMPER-PROOF BATCH RECORDS
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
Grant Probability With Interview: 99% (+22.2%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 383 resolved cases by this examiner. Grant probability derived from career allow rate.
