Prosecution Insights
Last updated: April 19, 2026
Application No. 18/683,231

UE CLUSTERING IN FL MODEL UPDATE REPORTING

Status: Final Rejection (§103)
Filed: Feb 12, 2024
Examiner: RASHID, ISHRAT
Art Unit: 2459
Tech Center: 2400 — Computer Networks
Assignee: Qualcomm Incorporated
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 78%

Examiner Intelligence

Career Allow Rate: 58% (115 granted / 198 resolved; at TC average)
Interview Lift: +19.9% for resolved cases with interview
Typical Timeline: 3y 2m average prosecution
Career History: 220 total applications across all art units; 22 currently pending

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 53.5% (+13.5% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)
Tech Center average is an estimate. Based on career data from 198 resolved cases.
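Each delta above is measured against the same Tech Center baseline, so the baseline can be recovered from any rate/delta pair. A quick back-of-the-envelope check (not part of the dashboard itself) using the numbers on the cards:

```python
# Allowance rate (%) and delta vs. Tech Center average (percentage points),
# per statute, taken from the cards above.
statute_stats = {
    "101": (7.0, -33.0),
    "103": (53.5, +13.5),
    "102": (15.5, -24.5),
    "112": (17.8, -22.2),
}

# rate - delta recovers the baseline; every statute implies the same ~40% TC average.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in statute_stats.items()}
print(implied_tc_avg)
```

All four pairs imply an identical 40.0% baseline, consistent with the single Tech Center average estimate the chart note describes.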

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to the Remarks and Amendments filed on December 22, 2025. Claims 1-9, 30-44, and 49-52 are pending. Claims 1-3, 30-32, and 40 are amended. Claims 10-29 and 45-48 are canceled. Claims 49-52 are new.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-9, 30-31, 33-41, 43-44, 49 and 51 are rejected under 35 U.S.C. 103 as being unpatentable over Gil Ramos et al. (US 2020/0394465), in view of "SERIES Y: GLOBAL INFORMATION INFRASTRUCTURE, INTERNET PROTOCOL ASPECTS, NEXT-GENERATION NETWORKS, INTERNET OF THINGS AND SMART CITIES", dated 10/2019, hereinafter NPL.
Regarding claim 1, Gil Ramos teaches an apparatus for wireless communication at a user equipment (UE), comprising: [[a]] memory (Gil Ramos fig.1); and one or more processors coupled to the memory and, the memory (Gil Ramos fig.1) configured to cause the UE to: receive, from a network node, one or more criteria to group a plurality of UEs into a UE group for a combined machine learning model update (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. 
The server 16 includes one or more global learned models. The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22); receive, from one or more UEs in the UE group, individual machine learning model updates for a machine learning model update (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. 
The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22); and transmit the combined machine learning model update to the network node based on the individual machine learning model updates from the one or more UEs (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. 
The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22). Gil Ramos teaches the above but Gil Ramos does not explicitly teach wherein the one or more criteria include a scenario identifier (ID) or a cell ID; form the UE group with one or more UEs based on the one or more criteria. However, in a similar field of endeavor, NPL teaches wherein the one or more criteria include a scenario identifier (ID) or a cell ID (NPL section 6.4.1.2.1 provides “It is critical that ML-enabled networks collect position estimates (e.g., GNSS data, cell identifier, tracking area identifier) for mobility pattern prediction…Examples of context information data include UE identifier, various logs, KPIs and environmental information (e.g., maps). For the network state data, various identification information at the AN (e.g., cell identifier, beam identifier…)”; section 6.4.2.2.1 provides “It is critical that ML-enabled networks support load balancing schemes which consider automatically the current number of UEs and the traffic speed of the UEs, and schedules UEs among different cells for a guaranteed QoE. Expected requirements It is expected that ML-enabled networks support the collection of the following measurement data for load balancing and cell split/merge schemes: • number of UEs in the current cell and the neighbouring cells”); form the UE group with one or more UEs based on the one or more criteria (NPL section 6.4.2.2.1 provides “It is critical that ML-enabled networks support load balancing schemes which consider automatically the current number of UEs and the traffic speed of the UEs, and schedules UEs among different cells for a guaranteed QoE. Expected requirements It is expected that ML-enabled networks support the collection of the following measurement data for load balancing and cell split/merge schemes: • number of UEs in the current cell and the neighbouring cells”). 
One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of NPL for cell identifier. The teachings of NPL, when implemented in the Gil Ramos system, will allow one of ordinary skill in the art to utilize same-cell location as a common criterion to group devices. Therefore, the examiner concludes it would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's invention to arrive at the above-claimed invention.

Regarding claim 2, the apparatus of claim 1, wherein the one or more criteria further identify at least one UE for the UE group (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models.
The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22). Regarding claim 4, the apparatus of claim 1, wherein the one or more processors are further configured to cause the UE to: receive a configuration to collect the individual machine learning model updates of the plurality of UEs in the UE group over sidelink and to transmit the combined machine learning model update to the network node (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. 
The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22). Regarding claim 5, the apparatus of claim 4, wherein the memory and the at least one or more processors are further configured to cause the UE to: form the UE group with the one or more UEs over the sidelink based on the configuration and the one or more criteria to group the plurality of UEs (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. 
This selection may be performed using an application or website configuration. The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22). Regarding claim 6, the apparatus of claim 1, wherein the one or more processors are further configured to cause the UE to: perform model merging of the individual machine learning model updates from the one or more UEs to extract one or more features to report in the combined machine learning model update (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. 
In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22). Regarding claim 7, the apparatus of claim 6, wherein the combined machine learning model update is based on an average between the individual machine learning model updates (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. 
In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22). Regarding claim 8, the apparatus of claim 6, wherein the combined machine learning model update is based on a similarity analysis between the individual machine learning model updates (Gil Ramos [0033] provides “The analytics provided to individual users may therefore take into account more than one set of raw data, which offers improvement. The analytics may be personalized to individual users, and may also provide aggregated analytics for the common household or group”). 
Regarding claim 9, the apparatus of claim 1, wherein the one or more processors are further configured to cause the UE to: receive an updated model for the machine learning model from the network node after transmitting the combined machine learning model update to the network node.

Regarding claim 30, this claim contains limitations found within those of claim 1, and the same rationale of rejection applies, where applicable.
Regarding claim 31, this claim contains limitations found within those of claim 2, and the same rationale of rejection applies, where applicable.
Regarding claim 33, this claim contains limitations found within those of claim 4, and the same rationale of rejection applies, where applicable.
Regarding claim 34, this claim contains limitations found within those of claim 5, and the same rationale of rejection applies, where applicable.
Regarding claim 35, this claim contains limitations found within those of claim 6, and the same rationale of rejection applies, where applicable.
Regarding claim 36, this claim contains limitations found within those of claim 7, and the same rationale of rejection applies, where applicable.
Regarding claim 37, this claim contains limitations found within those of claim 8, and the same rationale of rejection applies, where applicable.
Regarding claim 38, this claim contains limitations found within those of claim 9, and the same rationale of rejection applies, where applicable.
Regarding claim 39, The apparatus of claim 1, further comprising: one or more antennas coupled to the one or more processors, wherein the one or more processors are configured to receive the one or more criteria, receive the individual machine learning model updates for the machine learning model, and transmit the combined machine learning model update via the one or more antennas (Gil Ramos [0030] provides “Such devices tend to have in-built wireless transceivers for transmitting and receiving data using protocols such as WiFi, WiMAX and Bluetooth and any other communication method”; [0044] provides “User devices 34, 36, 37, 38, 40 in use transmit users' raw data to the first hub 10. The raw data may be sent wirelessly over any communication network”). Regarding claim 40, this claim contains limitations found within those of claim 1, and the same rationale of rejection applies, where applicable. Regarding claim 41, this claim contains limitations found within those of claim 2, and the same rationale of rejection applies, where applicable. Regarding claim 43, this claim contains limitations found within those of claim 4, and the same rationale of rejection applies, where applicable. Regarding claim 44, this claim contains limitations found within those of claim 5, and the same rationale of rejection applies, where applicable. Regarding claim 49, the apparatus of claim 1, wherein the one or more criteria received from the network node to group the plurality of UEs into the UE group includes the cell ID (NPL section 6.4.2.2.1 provides “It is critical that ML-enabled networks support load balancing schemes which consider automatically the current number of UEs and the traffic speed of the UEs, and schedules UEs among different cells for a guaranteed QoE. 
Expected requirements It is expected that ML-enabled networks support the collection of the following measurement data for load balancing and cell split/merge schemes: • number of UEs in the current cell and the neighbouring cells”). Motivation provided with reference to claim 1. Regarding claim 51, this claim contains limitations found within those of claim 49, and the same rationale of rejection applies, where applicable. Claims 3, 32 and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Gil Ramos et al (US 2020/0394465), in view of “SERIES Y: GLOBAL INFORMATION INFRASTRUCTURE, INTERNET PROTOCOL ASPECTS, NEXT-GENERATION NETWORKS, INTERNET OF THINGS AND SMART CITIES”, dated 10/2019, hereinafter, NPL. Regarding claim 3, Gil Ramos-NPL has taught the apparatus of claim 1, but Gil Ramos-NPL does not explicitly teach wherein the one or more criteria further indicate a distance from the UE. However, in a similar field of endeavor, Haustein teaches wherein the one or more criteria further indicate a distance from the UE (Haustein [1159] provides “Such an optimisation criterion may be, e.g., a local minimum for each node, that each node perceives interference each below a threshold that may be device individual, group-individual (e.g., groups of different types of devices—e.g., IoT, UE, . . . and/or of different distances to the reporting device, e.g., assuming that a greater distance to the reporting victim needs a lower reduction of interference or the like) or valid for all devices”). One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Haustein for distance as a criterion for grouping user devices. The teachings of Haustein, when implemented in the Gil Ramos-NPL system, will allow one of ordinary skill in the art to reduce communication overhead. 
Therefore, the examiner concludes it would have been obvious to one of ordinary skill in the art before the effective filing date of applicant’s invention to arrive at the above-claimed invention. Regarding claim 32, this claim contains limitations found within those of claim 3, and the same rationale of rejection applies, where applicable. Regarding claim 42, this claim contains limitations found within those of claim 3, and the same rationale of rejection applies, where applicable. Claims 50 and 52 are rejected under 35 U.S.C. 103 as being unpatentable over Gil Ramos et al (US 2020/0394465), in view of “SERIES Y: GLOBAL INFORMATION INFRASTRUCTURE, INTERNET PROTOCOL ASPECTS, NEXT-GENERATION NETWORKS, INTERNET OF THINGS AND SMART CITIES”, dated 10/2019, hereinafter, NPL, further in view of Tapia (US 2018/0270126). Regarding claim 50, Gil Ramos-NPL has taught the apparatus of claim 1 including wherein the one or more criteria is received from the network node to group the plurality of UEs (Gil Ramos fig.1 and [0038-0061] provides user devices 34, 36, 37, 38, 40 and first and second users 30, 32. The system 1 also comprises a remote, external server 16. A first hub 10 is configured to collect raw data from the one or more user devices 34, 36, 37, 38, 40. Other hubs 20, 22 may be in respective different locations from the first hub and associated with other groups of users. The user devices 34, 36, 37, 38, 40 may belong to members of a common household, such as family members, they may register with the first hub 10. The first hub 10 may issue an identifying beacon, similar to a SSID. In some embodiments, one or more of the first or second users 30, 32 may opt out of sending one or more selected data sets, such as weight, and allow the other data sets to be sent. This selection may be performed using an application or website configuration. 
The first hub 10 may comprise one or more local learned models that use the received raw data to generate local parameters that are used to update one or more local learned models. The one or more local learned models may also use the users' raw data to provide personalized analytics to each of the user devices 34, 36, 37, 38, 40 and/or to update one or more local models stored on said user devices so that said user devices can provide the personalized analytics. The first hub 10 may generate global parameters using the one or more local learned models. The global parameters may be generated by using the combined data set. The server 16 may receive global parameters from the plurality of other hubs 20, 22. The server 16 includes one or more global learned models. The server 16 may update the one or more global learned models using the received global parameters from the hubs 10, 20, 22), but Gil-Ramos-NPL does not explicitly teach wherein the one or more criteria to group the plurality of UEs into the UE group includes the scenario ID. However, in a similar field of endeavor, Tapia teaches wherein the one or more criteria to group the plurality of UEs into the UE group includes the scenario ID (Tapia [0039] provides “…the parameters for selecting a specific group of user device that is the subject of a QoE assessment query may specify user devices that are associated with or uses a particular base station, a particular air interface, a particular network cell, a particular service, a particular application, a particular third-party service provider, a particular router, a particular gateway, a particular core network component, a particular day and/or time, and/or so forth. 
Additionally, the parameters may further specify user devices that are associated with similarly situated network cells, similar weather condition/environment, same roaming condition, same service plan or service account, same usage scenario (e.g., a particular combination of applications), and/or so forth. Accordingly, the query module 224 may select different groups of user devices for analysis based on the QoE assessment queries.").

One of ordinary skill in the art before the effective filing date of the claimed invention would have recognized the ability to utilize the teachings of Tapia for scenario ID as a criterion for grouping user devices. The teachings of Tapia, when implemented in the Gil Ramos-NPL system, will allow one of ordinary skill in the art to cluster similar contexts to provide specialized global models for each scenario. Therefore, the examiner concludes it would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's invention to arrive at the above-claimed invention.

Regarding claim 52, this claim contains limitations found within those of claim 50, and the same rationale of rejection applies, where applicable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Kumar et al. (US 2022/0182263).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHRAT RASHID whose telephone number is (571)272-5372. The examiner can normally be reached 10AM-6PM EST M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tonia L Dollinger, can be reached at 571-272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /I.R/Examiner, Art Unit 2459 /TONIA L DOLLINGER/Supervisory Patent Examiner, Art Unit 2459
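Claims 6-7 in the rejection above turn on merging the individual updates by averaging, i.e., the federated-averaging pattern. As a rough illustration only (hypothetical names; this is not code from the application or from the cited references), a group-lead UE combining member updates before reporting might look like:

```python
from typing import List, Optional

def combine_updates(individual_updates: List[List[float]],
                    weights: Optional[List[float]] = None) -> List[float]:
    """Average per-UE model updates into one combined update (plain FedAvg)."""
    n = len(individual_updates)
    if weights is None:
        weights = [1.0 / n] * n  # unweighted mean, matching claim 7's "average"
    combined = [0.0] * len(individual_updates[0])
    for w, update in zip(weights, individual_updates):
        for i, v in enumerate(update):
            combined[i] += w * v
    return combined

# Two member UEs report updates (e.g., over sidelink, per claim 4); the lead UE
# averages them before transmitting the combined update to the network node.
print(combine_updates([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```

A weighted variant of the same function could stand in for claim 8's similarity-based combining, with weights derived from a similarity analysis between updates.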

Prosecution Timeline

Feb 12, 2024
Application Filed
Sep 30, 2025
Non-Final Rejection — §103
Dec 22, 2025
Response Filed
Mar 26, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603930: CONTENT DELIVERY (2y 5m to grant; granted Apr 14, 2026)
Patent 12598109: NETWORK PERFORMANCE EVALUATION USING AI-BASED NETWORK CLONING (2y 5m to grant; granted Apr 07, 2026)
Patent 12587586: REDUCING LATENCY AND OPTIMIZING PROXY NETWORKS (2y 5m to grant; granted Mar 24, 2026)
Patent 12587593: DATA TRANSMISSION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12562993: PACKET FRAGMENTATION PREVENTION IN AN SDWAN ROUTER (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 78% (+19.9%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 198 resolved cases by this examiner. Grant probability derived from career allow rate.
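The with-interview figure is simply the career allow rate plus the examiner's interview lift, rounded to a whole percentage. A one-line check using the numbers above:

```python
base_grant_probability = 58.0  # career allow rate, %
interview_lift = 19.9          # percentage points added by conducting an interview

with_interview = round(base_grant_probability + interview_lift)  # 77.9 -> 78
print(f"{with_interview}%")
```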
