Prosecution Insights
Last updated: April 19, 2026
Application No. 18/687,965

METHODS FOR FEDERATED LEARNING OVER WIRELESS (FLOW) IN WIRELESS LOCAL AREA NETWORKS (WLAN)

Final Rejection §103
Filed
Feb 29, 2024
Examiner
NGUYEN, STEVEN C
Art Unit
2451
Tech Center
2400 — Computer Networks
Assignee
InterDigital Patent Holdings, Inc.
OA Round
2 (Final)
Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 3y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (254 granted / 413 resolved; +3.5% vs TC avg)
Interview Lift: +50.6% among resolved cases with an interview (strong)
Avg Prosecution: 3y 8m typical timeline (27 currently pending)
Total Applications: 440 across all art units (career history)

Statute-Specific Performance

§101: 13.8% (-26.2% vs TC avg)
§102: 6.7% (-33.3% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 413 resolved cases.

Office Action

§103
DETAILED ACTION

1. This action is responsive to the communications filed on 12/11/2025.
2. Claims 21-35 are pending in this application.
3. Claims 21-35 have been amended.
4. Claims 1-20 have been previously cancelled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/11/2025 has been acknowledged and is being considered by the examiner.

Response to Arguments

Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive. In the remarks, applicant argued that:

a. In mapping the claim language of claim 32 to Ren, the Office Action indicates that Ren at paragraphs 0060-0063 discloses transmitting a support frame indicating participation in the model sharing process. Ren at paragraph 0063 states that "a report of the model update can be transmitted to the relay UE in SL communications". Ren further states at paragraph 0060 that the "model update to be applied to the FL model can be generated, for the FL model and based on local training on the FL model" (emphasis added). It is clear that the model update in Ren occurs before transmission of the model update, and therefore the art does not disclose "after transmitting the support frame, updating the first model" as claimed. Even if Ren were to be combined with Li, any update would be before transmission of that update. Additionally, it is respectfully submitted that Ren does not disclose "transmitting, in response to the learning model announcement frame, a support frame indicating participation in the model sharing process". Ren discloses receiving an indication of an FL model at 402 and transmission of a model update at 408 (see Ren at FIG. 4 and paragraphs 0059-0063).
However, the transmission of the model update is, at best, based on generation of the model update at 404, rather than the indication of the FL model (e.g., at 402) mapped to the claimed learning model announcement frame (see Office Action at pp. 3-4). It is respectfully requested that the rejections be withdrawn.

In response: The examiner respectfully disagrees. The examiner believes that applicant mistakenly referred to claim 32 in the arguments and instead meant to refer to claim 21, as the claim language argued is from claim 21. Ren disclosed that the first step in the process is to receive, from the base station, an indication of an FL model (Figure 4, step 402). The examiner is equating this indication to the claimed "learning model announcement frame" as the indication from the base station indicates a model sharing process in Ren. In the indication from the base station, the base FL model is sent to the UEs so that the UEs can use the base FL model to perform a local model updating process (Ren, Paragraph 59). Then, after receiving the base FL model from the base station, the UE will generate, for that FL model, a model update based on the local training the UE has done (Figure 4, step 404). The model updates that are generated by the UEs are collected and applied to the base model (Paragraphs 60-63, Figure 4, step 408). The model update of Ren is equated to the claimed "support frame." An argument can be made that Ren discloses "after transmitting the support frame, updating the first model…" as Ren will update the base FL model (claimed first model) after the model updates (claimed support frame) are collected. However, the examiner has cited Li to more explicitly disclose this limitation. The Li reference was not argued in applicant's remarks.
It appears that the applicant is putting significant weight on the terms "learning model announcement frame" and "support frame." The examiner suggests applicant amend the limitations to include exactly what they entail in order to possibly overcome the art of record.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 21-24, 26-32, 34, and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Ren et al. (US 2024/0256898) in view of Li et al. (US 2022/0038349).
Regarding claim 21, Ren disclosed: A method performed by a station (Figure 2, UE 104), the method comprising: receiving, via an access point (AP) (Paragraph 41, access point) associated with a basic service set (BSS) (Paragraph 41, basic service set) in a wireless local area network (WLAN) (Paragraph 36, WiFi), a learning model announcement frame (Paragraph 59, indication of FL model) indicating a model sharing process (Paragraph 41, the UE may be referred to as a station. The base station that is connected to the UE includes a basic service set along with an access point. Paragraph 59, an indication of an FL model is received from a base station. The base FL model is transmitted to various UEs so that the various UEs can perform a local model updating process (i.e., model sharing process)); transmitting (Paragraph 63, transmitting), in response to the learning model announcement frame, a support frame (Paragraph 60, model update) indicating participation in the model sharing process (Paragraph 60, local training to update the model), the support frame comprising a set of parameters (Paragraph 61, parameters) associated with a first model (Paragraph 60, Figure 4, a model update to be applied to the FL model is generated based on local training on the FL model after receiving the indication of the FL model from the base station. The local model updating component 242 (of the UE) generates the model update to be applied to the FL model. The updates from the various UEs are collected and applied to the base model. Paragraph 63, a report of the model update is transmitted).

While Ren disclosed sending a model update to be applied to the FL model (see above), Ren did not explicitly disclose after transmitting the support frame, updating the first model based at least upon the participation in the model sharing process.
However, in an analogous art, Li disclosed after transmitting the support frame, updating the first model based at least upon the participation in the model sharing process (Paragraph 58, the UEs act as the local nodes which are responsible for sending a training model request (i.e., support frame), receiving a trained model, and training the model locally with its own data. Paragraph 61, all UEs can perform local training and update the model parameter (i.e., participation)).

One of ordinary skill in the art would have been motivated to combine the teachings of Ren with Li because the references involve federated learning and, as such, are within the same environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the model updating of Li with the teachings of Ren in order to enlarge the training data to take advantage of various updated parameters (Li, Paragraph 58).

Regarding claim 28, the claim is substantially similar to claim 21. Claim 28 recites a transceiver and a processor (Ren, Figure 2, transceiver 202 and processor 212). Therefore, the claim is rejected under the same rationale.

Regarding claims 22 and 29, the limitations of claims 21 and 28 have been addressed. Ren and Li disclosed: wherein the first model comprises at least one of a federated learning (FL) model or a machine learning (ML) model (Ren, Paragraph 59, the FL model can be an ML model).

Regarding claims 23 and 30, the limitations of claims 21 and 28 have been addressed.
Ren and Li disclosed: further comprising: receiving a second support frame from a second station, the second support frame comprising a second set of parameters associated with a second model (Ren, Paragraph 75, Figure 7, showing multiple UEs that relay model updates to the relay UE); and updating the first model based on the second set of parameters (Ren, Paragraph 75, converging the local model updates into a relay model update sent to the base station).

Regarding claims 24 and 31, the limitations of claims 21 and 28 have been addressed. Ren and Li disclosed: wherein the model announcement frame comprises announcement parameters, the announcement parameters comprising at least one of a model identifier (ID) (Ren, Paragraph 65, model identifier), a number of model layers, or a number of weights per layer, the method further comprising configuring the station to update the first model in accordance with the announcement parameters (Ren, Paragraph 65, the model update includes reports with model parameters and corresponding values. Paragraph 66, a converged model update is generated based on the parameters within the model update).

Regarding claims 26 and 34, the limitations of claims 21 and 28 have been addressed. Ren and Li disclosed: further comprising: training the first model; and transmitting results of the training of the first model (Ren, Paragraph 60, based on the local training, a model update to be applied to the FL model is generated for the FL model. Paragraph 71, the model update is transmitted to an upstream node).

Regarding claims 27 and 35, the limitations of claims 21 and 28 have been addressed. Ren and Li disclosed: further comprising: receiving a message comprising a target wake time (TWT); and configuring the station to receive training parameters for the first model during the TWT (Ren, Paragraph 65, receiving the update reports from multiple UEs at a similar time. This is based on a time (i.e., TWT) or trigger detected by the UEs).
Regarding claim 32, the limitations of claim 28 have been addressed. Ren and Li disclosed: wherein the announcement frame comprises at least one of an uplink (UL) schedule or a downlink (DL) schedule, the processor further configured to configure the station to transmit and receive in accordance with at least one of the UL schedule or the DL schedule (Ren, Paragraph 22, federated learning includes downlink signaling to scheduled users. Paragraph 23, UEs are scheduled for FL in downlink (i.e., transmit/receive in accordance with schedule) but some low tier devices may have uplink coverage constraints).

Claims 25 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Ren et al. (US 2024/0256898) in view of Li et al. (US 2022/0038349) and Pezeshki et al. (US 2022/0182802).

Regarding claims 25 and 33, the limitations of claims 21 and 28 have been addressed. Ren and Li did not explicitly disclose: further comprising: receiving a gradient update for the first model; and configuring the station to update the first model in accordance with the received gradient update.

However, in an analogous art, Pezeshki disclosed receiving a gradient update for the first model; and configuring the station to update the first model in accordance with the received gradient update (Paragraph 26, the client device generates a local update associated with the machine learning component based on the local training operation. The local update includes a set of gradients associated with a loss function corresponding to the locally updated machine learning component. Paragraph 27, in federated learning, the client device then provides the update to the server device, which applies the aggregated update to the machine learning component).

One of ordinary skill in the art would have been motivated to combine the teachings of Ren and Li with Pezeshki because the references involve federated learning and, as such, are within the same environment.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the gradient of Pezeshki with the teachings of Ren and Li in order to optimize the model parameters (Pezeshki, Paragraph 67).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steven C. Nguyen, whose telephone number is (571) 270-5663. The examiner can normally be reached M-F, 7 AM - 3 PM, and alternatively through e-mail at Steven.Nguyen2@USPTO.gov.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Christopher Parry, can be reached at 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.C.N/
Examiner, Art Unit 2451

/Chris Parry/
Supervisory Patent Examiner, Art Unit 2451
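The federated-learning round that the rejection maps onto the claims (a base model is announced, stations reply with locally trained parameter updates, and the collector folds the updates back into the shared model) can be sketched minimally. Everything below is illustrative only: the `Station` class, `local_shift` data, and averaging step are assumptions for the sketch, not drawn from the application or from Ren, Li, or Pezeshki.

```python
# Illustrative sketch of one federated-learning round as described in the
# rejection: announce a base model, collect per-station updates, aggregate.
from dataclasses import dataclass


@dataclass
class Station:
    """A station holding private training data (modeled here as a bias)."""
    local_shift: float

    def local_update(self, base_params: list[float]) -> list[float]:
        # Local "training": nudge each parameter toward this station's data.
        return [p + self.local_shift for p in base_params]


def federated_round(base_params: list[float],
                    stations: list[Station]) -> list[float]:
    # 1. Announcement: every station receives the base model parameters.
    # 2. Support/update: every station returns locally trained parameters.
    updates = [s.local_update(base_params) for s in stations]
    # 3. Aggregation: the collector averages the updates into the shared model.
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(base_params))]


base = [0.0, 1.0]
stations = [Station(0.2), Station(-0.2), Station(0.6)]
new_model = federated_round(base, stations)
# Mean shift is (0.2 - 0.2 + 0.6) / 3 = 0.2, so each parameter moves by 0.2.
```

The sequencing dispute in the arguments maps onto steps 2 and 3: each station's update is generated before it is transmitted, while the shared ("first") model is only updated after the updates are collected.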

Prosecution Timeline

Feb 29, 2024
Application Filed
Sep 05, 2025
Non-Final Rejection — §103
Dec 11, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592855
Network Intent Orchestration in Enterprise Fabrics
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12580863
SYSTEMS AND METHODS FOR PROVIDING ANALYTICS FROM A NETWORK DATA ANALYTICS FUNCTION BASED ON NETWORK POLICIES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12580872
DYNAMIC QOS CHANGES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12537749
LEARNING-BASED NETWORK OPTIMIZATION SERVICE
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12531931
SYSTEMS AND METHODS FOR CREATING A VIRTUAL KVM SESSION BETWEEN A CLIENT DEVICE AND A TARGET DEVICE
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 99% (+50.6%)
Median Time to Grant: 3y 8m
PTA Risk: Moderate

Based on 413 resolved cases by this examiner. Grant probability derived from career allow rate.
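How the headline figures relate is not stated on the page; one plausible reading (an assumption, not the vendor's documented formula) is that the "+50.6%" interview lift is the percentage-point gap between the examiner's allow rate with and without an interview:

```python
# Hypothetical back-calculation of the dashboard's interview figures;
# the actual formula behind the page is an assumption, not documented.
allow_with_interview = 0.99   # "99% With Interview" (shown on the page)
interview_lift = 0.506        # "+50.6%" lift, read as percentage points
allow_without_interview = allow_with_interview - interview_lift
print(f"Implied allow rate without interview: {allow_without_interview:.1%}")
```

Under that reading, cases without an interview would imply a roughly 48% allow rate, which the 62% career average blends with the interviewed cases.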
