Prosecution Insights
Last updated: April 19, 2026
Application No. 17/822,488

SMART COMMUNICATION IN FEDERATED LEARNING FOR TRANSIENT AND RESOURCE-CONSTRAINED MOBILE EDGE DEVICES

Status: Final Rejection (§103)
Filed: Aug 26, 2022
Examiner: STANLEY, JEREMY L
Art Unit: 2127
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)

Grant Probability: 48% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 48% of resolved cases (131 granted / 276 resolved; -7.5% vs TC avg)
Interview Lift: +44.7% for resolved cases with interview (strong)
Typical Timeline: 3y 2m average prosecution
Currently Pending: 28 applications
Career History: 304 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Comparisons are against Tech Center average estimates • Based on career data from 276 resolved cases

Office Action — Final Rejection (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Amendment filed on October 29, 2025. Claims 1 and 11 are amended. Claims 7 and 17 are cancelled. Claims 1-6, 8-16, and 18-20 are pending in the case. Claims 1 and 11 are the independent claims. This action is final.

Applicant's Response

In the Amendment filed on October 29, 2025, Applicant amended the claims and provided arguments in response to the rejections of the claims under 35 USC 101 and 103 in the previous office action.

Response to Argument/Amendment

Applicant's amendments to the claims in response to the rejection of the claims under 35 USC 101 are acknowledged, and Applicant's associated arguments have been fully considered. Applicant argues that the claims are not directed to abstract matter because it is not possible for a human to implement, as a mental process, communications such as transmittal/receipt of gradients between central and edge nodes, and it is unclear how a human mind could host and operate a machine learning model and generate gradients based on those operations. In addition, Applicant argues that the claims are integrated into a practical application because they embrace processes in which efficient use is made of edge resources in edge nodes that are resource-constrained, because the quantization scheme takes into account the resources available at the edge nodes where the gradient vectors of the ML model are generated, such that the recited embodiments strike a balance between an acceptable level of ML model convergence, which is a function of the size of a gradient vector, and available network resources for generating and transmitting the gradients. Examiner notes that Applicant's arguments appear to indicate that the claims are directed to a practical application and/or improvement in the functioning of a computer or technical field (such as improvements in federated learning which achieve acceptable convergence of an ML model in a federated learning environment while taking into account and balancing the usage of available resources for performing training of the ML model, as discussed in at least paragraphs 0011, 0013, 0017-0018, 0028, and 0054 of the specification). Therefore, Applicant's arguments are persuasive and the 101 rejection is withdrawn.

Applicant's amendments to the claims in response to the rejection of the claims under 35 USC 103 are acknowledged, and Applicant's associated arguments have been fully considered. Applicant argues that the cited references fail to disclose the newly added limitations in the amended independent claims. For example, Applicant argues that "while Choi refers to a capability of a worker to support a quantization level, Choi fails to disclose that '…respective gradient vectors have been quantized, by the edge nodes, based on computing resources available at the edge nodes'….Choi fails to disclose that quantization is performed based on both [1] computing resources available at edge nodes and [2] an acceptable convergence of the ML model, as claimed."

However, Choi clearly teaches: receiving, by the central node from each of the edge nodes, a respective gradient vector created by the edge node based on operations of a respective instance of an ML (machine learning) model at the edge node, wherein each gradient vector has been quantized according to the quantization level (e.g. paragraph 0087, Fig. 2, each worker compressing gradient data according to the indicated quantization level, and transmitting the compressed gradient data to the server; paragraphs 0095-0097, worker calculating gradient data using local model, compressing the gradient data according to the first quantization level, and transmitting the compressed gradient data to the server, including an indication of the quantization level used to compress the gradient data), and the respective gradient vectors have been quantized, by the edge nodes, based on computing resources available at the edge nodes (e.g. paragraph 0046, time it takes for worker to transmit quantized or compressed gradient data based on conditions of channel between worker and server; paragraph 0048, server selecting and indicating quantization level for each iteration in training the global model; if conditions change, server selecting quantization level such that worker can transmit compressed gradient within some finite time, such as selecting a lower quantization when channel conditions worsen, resulting in lower data rates; paragraphs 0085, 0091, selecting quantization level based on capability message from worker indicating supported quantization levels; paragraph 0102, Fig. 3, selecting lower quantization level such that worker may transmit compressed gradient data within finite time; paragraph 0121, reducing processing resources and power consumption associated with federated learning procedures; transmitting compressed gradient data based on indicated quantization level reduces time spent and quantity of resources used to transmit the gradient data; i.e. the quantization level is selected based at least in part on available link budget, bandwidth, quality, and supported quantization levels at the worker/edge nodes; Examiner notes that Applicant's remarks filed October 29, 2025 appear to support the interpretation that resources available at the edge nodes include available network resources (see page 8, second full paragraph of Applicant's remarks, "takes into account the resources available at the edge nodes….striking a balance between an acceptable level of ML model convergence…and available network resources for generating and transmitting the gradients")), and based on an acceptable convergence of the ML model (e.g. paragraphs 0003, 0047, 0049, 0077, 0084, 0088, 0144, ensuring global convergence of machine learning model; use of quantization levels for gradient data output by local models to reduce latency and ensure global convergence; Examiner notes that the claims recite both an ML model generally, and an instance of the ML model specifically at the edge node, but do not appear to require that the acceptable convergence of the ML model be specifically limited to a convergence of the instance of the ML model at the edge node, as opposed to convergence of the ML model in general, such as convergence of the global ML model which the instance is based upon; this interpretation appears to be supported by the specification of the instant application, such as in paragraph 0013, which indicates that "acceptable level of model convergence…may be a function of the size of a gradient vector, and available network resources for transmitting the gradients, edge node resources…", paragraph 0020, describing convergence of the single central model, and paragraph 0045, which describes improvements in "convergence of the model whose gradients are being sent from the edge nodes to the central node", which collectively appear to indicate that the relevant convergence is of the global/central model at the central node which uses the transmitted gradients, and not of any particular local model at an edge node).

As can be seen from the cited portions of Choi above, Choi clearly teaches that quantization is performed based on computing resources available at edge nodes, such as the fact that the edge node supports a given quantization level (indicative of a computing capability/resource which is available at the edge node), and network resources available at the edge nodes (such as link budget, channel bandwidth, channel quality, etc.). In addition, Choi clearly teaches that the quantization is also performed based on the goal of obtaining a global convergence of the ML model, instead of a local convergence, which is analogous to an acceptable convergence of the ML model. Other than briefly discussing the concept of Choi's capability messages/capability of a worker to support a quantization level, Applicant does not appear to discuss or consider any of the above-cited teachings of Choi. Therefore, Applicant's arguments regarding the 103 rejection are not persuasive, and the rejection is maintained below.
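For readers mapping the claim language onto an implementation, the sketch below shows one way an edge node might quantize a gradient vector "according to the quantization level" it receives from the central node. This is a minimal sketch assuming a uniform quantizer; the function names, the bit-width encoding of the level, and the scale-transmission scheme are illustrative assumptions, not the method of Choi, Sakai, or the application.

```python
import numpy as np

def quantize_gradient(grad: np.ndarray, bits: int) -> tuple[np.ndarray, float]:
    """Uniformly quantize a gradient vector to 2**bits - 1 levels.

    Returns integer codes plus the scale needed to dequantize.
    Illustrative only; no cited reference is limited to this quantizer.
    """
    levels = 2 ** bits - 1
    scale = float(np.max(np.abs(grad)))
    if scale == 0.0:
        scale = 1.0  # avoid divide-by-zero on an all-zero gradient
    normalized = (grad / scale + 1.0) / 2.0  # map [-scale, scale] -> [0, 1]
    codes = np.round(normalized * levels).astype(np.uint32)
    return codes, scale

def dequantize_gradient(codes: np.ndarray, scale: float, bits: int) -> np.ndarray:
    """Invert quantize_gradient, up to rounding error."""
    levels = 2 ** bits - 1
    return (codes.astype(np.float64) / levels * 2.0 - 1.0) * scale

# An edge node sending 4-bit codes moves roughly 1/8 the payload of float32,
# which is the bandwidth/convergence trade-off the arguments above turn on.
grad = np.random.randn(1_000_000).astype(np.float32)
codes, scale = quantize_gradient(grad, bits=4)
approx = dequantize_gradient(codes, scale, bits=4)
```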
Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1, 2, 4-6, 8, 11, 12, 14-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Choi et al. (US 20220245527 A1) in view of Sakai et al. (US 20230123756 A1).

With respect to claims 1 and 11, Choi teaches a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations comprising a method (e.g. paragraphs 0255-0256, described functions implemented in hardware, software executed by a processor, firmware, etc.; if implemented in software executed by a processor, functions stored as instructions/code on a computer readable medium); and the method, comprising:

transmitting, by a central node to each edge node in a group of edge nodes, a quantization level (e.g. paragraph 0045, server and set of workers (UEs) implementing federated learning techniques to train machine learning models over wireless communication system; paragraph 0047, server determining quantization level for workers to use to compress gradient data and transmitting an indication of the quantization level to workers; paragraph 0079, Fig. 2, wireless communication system supporting adaptive quantization level selection in federated learning, including server 205 and workers 215; paragraph 0085, server determining for each worker a quantization level for gradient data output by respective local model based on various conditions; paragraph 0086, server transmitting respective quantization level indication to each of the workers; paragraph 0094, Fig. 3, server 305 transmitting first indication of first quantization level for gradient data to worker 310);

receiving, by the central node from each of the edge nodes, a respective gradient vector created by the edge node based on operations of a respective instance of an ML (machine learning) model at the edge node, wherein each gradient vector has been quantized according to the quantization level (e.g. paragraph 0087, Fig. 2, each worker compressing gradient data according to the indicated quantization level, and transmitting the compressed gradient data to the server; paragraphs 0095-0097, worker calculating gradient data using local model, compressing the gradient data according to the first quantization level, and transmitting the compressed gradient data to the server, including an indication of the quantization level used to compress the gradient data), and the respective gradient vectors have been quantized, by the edge nodes, based on computing resources available at the edge nodes (e.g. paragraph 0046, time it takes for worker to transmit quantized or compressed gradient data based on conditions of channel between worker and server (link budget, channel bandwidth, channel quality); paragraph 0048, server selecting and indicating quantization level for each iteration in training the global model; if conditions change, server selecting quantization level such that worker can transmit compressed gradient within some finite time, such as selecting a lower quantization when channel conditions worsen, resulting in lower data rates; paragraphs 0085, 0091, selecting quantization level based on capability message from worker indicating supported quantization levels; paragraph 0102, Fig. 3, selecting lower quantization level such that worker may transmit compressed gradient data within finite time; paragraph 0121, reducing processing resources and power consumption associated with federated learning procedures; transmitting compressed gradient data based on indicated quantization level reduces time spent and quantity of resources used to transmit the gradient data; i.e. the quantization level is selected based at least in part on available link budget, bandwidth, quality, and supported quantization levels at the worker/edge nodes; Examiner notes that Applicant's remarks filed October 29, 2025 appear to support the interpretation that resources available at the edge nodes include available network resources (see page 8, second full paragraph of Applicant's remarks, "takes into account the resources available at the edge nodes….striking a balance between an acceptable level of ML model convergence…and available network resources for generating and transmitting the gradients")), and based on an acceptable convergence of the ML model (e.g. paragraphs 0003, 0047, 0049, 0077, 0084, 0088, 0144, ensuring global convergence of machine learning model; use of quantization levels for gradient data output by local models to reduce latency and ensure global convergence; Examiner notes that the claims recite both an ML model generally, and an instance of the ML model specifically at the edge node, but do not appear to require that the acceptable convergence of the ML model be specifically limited to a convergence of the instance of the ML model at the edge node, as opposed to convergence of the ML model in general, such as convergence of the global ML model which the instance is based upon; this interpretation appears to be supported by the specification of the instant application, such as in paragraph 0013, which indicates that "acceptable level of model convergence…may be a function of the size of a gradient vector, and available network resources for transmitting the gradients, edge node resources…", paragraph 0020, describing convergence of the single central model, and paragraph 0045, which describes improvements in "convergence of the model whose gradients are being sent from the edge nodes to the central node", which collectively appear to indicate that the relevant convergence is of the global/central model at the central node which uses the transmitted gradients, and not of any particular local model at an edge node);

determining, by the central node, a lower quantization level than the quantization level (e.g. paragraph 0048, server selecting and indicating quantization level for each iteration in training the global model; if conditions change, server selecting quantization level such that worker can transmit compressed gradient within some finite time, such as selecting a lower quantization when channel conditions worsen, resulting in lower data rates; paragraph 0102, Fig. 3, selecting lower quantization level such that worker may transmit compressed gradient data within finite time);

determining, by the central node, whether to use the quantization level and the lower quantization level (e.g. paragraph 0048, server selecting and indicating quantization level for each iteration in training the global model; if conditions change, server selecting quantization level such that worker can transmit compressed gradient within some finite time, such as selecting a lower quantization when channel conditions worsen, resulting in lower data rates; paragraph 0102, Fig. 3, selecting lower quantization level such that worker may transmit compressed gradient data within finite time); and

based on an outcome of the determining, automatically adjusting the quantization level (e.g. paragraph 0048, server selecting and indicating quantization level for each iteration in training the global model; if conditions change, server selecting quantization level such that worker can transmit compressed gradient within some finite time, such as selecting a lower quantization when channel conditions worsen, resulting in lower data rates; paragraph 0102, Fig. 3, selecting lower quantization level such that worker may transmit compressed gradient data within finite time).

Choi does not explicitly disclose: that the determining of the lower quantization level is done by re-quantizing the gradient vectors that have been received from the edge nodes, wherein the gradient vectors are re-quantized by the central node to a lower quantization level than the quantization level; that the determining whether to use the quantization level and the lower quantization level is done by validating, by the central node, the quantization level and the lower quantization level; and, based on an outcome of the validating, automatically adjusting the quantization level.

However, Sakai teaches: that the determining of the lower quantization level is done by re-quantizing the gradient vectors that have been received from the edge nodes, wherein the gradient vectors are re-quantized by the central node to a lower quantization level than the quantization level (e.g. paragraph 0083, each of the gradients is a tensor (vector); paragraphs 0121-0126, Fig. 9, setting/resetting bit width S3, comparing calculated quantization error with threshold to determine quantization bit width, quantizing weights, activations, gradients, weight gradients, etc.; performing training using model subject to quantization; if set number of times of training not complete, returning to S3 resetting bit width and quantizing gradients S6 and S7 with the new bit width (i.e. requantizing the gradients prior to a subsequent round of training of the model); paragraph 0130, obtaining minimum bit width that makes the quantization error smaller than threshold; paragraph 0131, if process not complete for all layers, repeating process; paragraphs 0132-0133, if recognition rate of quantized model is lower than recognition rate of FP32 by more than predetermined value, recognition rate of quantized model is not equivalent to recognition rate of FP32 and process ultimately returns to S11 (i.e. where the model/gradients are requantized in the instance that the quantized model does not have a recognition rate within a threshold, and where the requantization includes quantizing using a minimum allowable bit width and is therefore a lower level than an initial level); Fig. 14, showing that as quantization iterations proceed, the quantization bit width (step size) decreases until minimum bit width is reached (i.e. requantization using a lower bit width/step size); see also paragraph 0170, Fig. 15, quantizing parameters using determined bit widths; paragraphs 0175-0176, checking whether number of iterations reaches fixed value or all parameters have been quantized to minimum available bit width; if not, process returns to S22 (i.e. parameters are first quantized using a first determined bit width and, subsequently, if conditions are not met, the process is repeated, such that the parameters are requantized using another determined bit width));

that the determining whether to use the quantization level and the lower quantization level is done by validating, by the central node, the quantization level and the lower quantization level (e.g. paragraph 0130, obtaining minimum bit width that makes the quantization error smaller than threshold; paragraph 0131, if process not complete for all layers, repeating process; paragraphs 0132-0133, if recognition rate of quantized model is lower than recognition rate of FP32 by more than predetermined value, recognition rate of quantized model is not equivalent to recognition rate of FP32 and process ultimately returns to S11; paragraphs 0146-0149, quantizing tensors and gradients generated in backpropagation, thereby shortening execution time of neural network; setting quantization threshold based on gradient of loss; determining bit width based on quantization threshold, quantizing each parameter with determined bit width; parameters quantized with newly determined bit width, and model loss after quantization is calculated and compared with loss limit, if model loss is smaller than loss limit; paragraph 0168, determining loss gradient using loss value estimated by training quantized model using validation dataset (compare with specification of the instant application at paragraphs 0045 and 0047, testing versions of the model against validation dataset and switching between quantization levels based on performance against the validation dataset); paragraphs 0171-0174, calculating model loss after quantization with new bit width, comparing calculated model loss with loss limit; either maintaining or discarding bit width and updating trust region radius according to comparison of model loss to loss limit; i.e. the various candidate quantization bit widths/levels are validated via comparison of the model loss resulting from quantization using the quantization bit widths to a loss limit/threshold); and,

based on an outcome of the validating, automatically adjusting the quantization level (e.g. paragraphs 0172-0174, Fig. 15, if model loss is smaller than the loss limit, newly determined bit width is maintained; if model loss is equal to or larger than the loss limit, the newly determined bit width is discarded and the model retains the prior bit width; i.e. where a first/prior bit width is utilized to quantize gradients of a model and then a second/subsequent bit width (which may be a lower bit width/quantization level) is utilized, and then the resulting model losses are compared to a threshold/limit (analogous to validating the respective gradients), and based on the comparisons/validation, either the first/prior bit width/quantization level or the second/subsequent/lower bit width/quantization level is selected for use as the quantization level).

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Choi and Sakai in front of him, to have modified the teachings of Choi (directed to techniques for adaptive quantization level selection in federated learning) to incorporate the teachings of Sakai (directed to tensor quantization), to include the capability to, after performing a first training iteration using the first quantization level (quantization bit width), re-quantize the gradients using a lower quantization level, to validate the two different quantization levels by comparing respective model losses from the quantizations against a loss limit/threshold, and to select one of the two quantization levels for use in adjusting the quantization level based on the outcome of the comparison/validation of the quantization levels against the loss limit/threshold (as taught by Sakai), where Choi teaches that the gradients are received at the central node from the edge nodes, such that, when the teachings of Sakai are incorporated into the system of Choi, the central node would perform the requantization and validation of the quantization levels using gradient vectors received from the edge nodes. One of ordinary skill would have been motivated to perform such a modification in order to shorten execution time of neural networks as described in Sakai (paragraphs 0030-0032).
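To picture the combined Choi/Sakai mechanism just described — the central node re-quantizes the received gradient vectors to a lower level, validates both levels against a loss limit, and adjusts the level based on the outcome — consider the following hedged sketch. It reuses the illustrative quantizer from the earlier snippet; `evaluate_loss` (a validation-set check) and `loss_limit` are hypothetical stand-ins and not drawn from any cited reference.

```python
from typing import Callable
import numpy as np

def select_level_by_validation(
    received: list[np.ndarray],       # dequantized gradient vectors from edge nodes
    current_bits: int,
    evaluate_loss: Callable[[np.ndarray], float],  # hypothetical validation-set loss
    loss_limit: float,
) -> int:
    """Re-quantize received gradients at a lower bit width, validate the
    candidate levels against a loss limit, and return the level for the
    next round. Illustrative sketch of the claimed sequence only."""
    lower_bits = current_bits - 1

    def aggregate_at(bits: int) -> np.ndarray:
        # Re-quantize each received gradient at `bits`, then average,
        # as a central node would before a model update.
        requant = [dequantize_gradient(*quantize_gradient(g, bits), bits)
                   for g in received]
        return np.mean(requant, axis=0)

    # Validation: does the cheaper (lower) level still keep the
    # validation loss within the acceptable-convergence limit?
    if evaluate_loss(aggregate_at(lower_bits)) <= loss_limit:
        return lower_bits      # lower level validated: adjust downward
    return current_bits        # otherwise retain the current level
```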
With respect to claims 2 and 12, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed, and Choi further teaches wherein one or more of the edge nodes comprises a respective mobile edge device (e.g. paragraph 0056, UE may include or be referred to as a mobile device/handheld device such as a personal electronic device, etc., among other examples which may be implemented in various objects including vehicles).

With respect to claims 4 and 14, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed, and Choi further teaches wherein automatically adjusting the quantization level comprises automatically adjusting the quantization level to the lower quantization level (e.g. paragraph 0048, server selecting and indicating quantization level for each iteration in training the global model; if conditions change, server selecting quantization level such that worker can transmit compressed gradient within some finite time, such as selecting a lower quantization when channel conditions worsen, resulting in lower data rates; paragraph 0077, adaptively selecting and indicating quantization levels; paragraph 0102, Fig. 3, selecting lower quantization level such that worker may transmit compressed gradient data within finite time).

With respect to claims 5 and 15, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed, and Sakai further teaches wherein the validating comprises determining, as between the quantization level and the lower quantization level, which quantization level enables better performance of a machine learning model with which the gradient vectors are associated (e.g. paragraph 0004, quantization is known technique for shortening execution time of neural networks; paragraphs 0072 and 0130, obtaining minimum bit width that makes the quantization error smaller than the threshold; paragraphs 0115-0116, high degree of quantization increases risk of accuracy deterioration; performing accuracy assurance; when recognition rate of quantized model determined to be deteriorated from recognition rate of FP32, parameters are set again, and quantization is performed again; paragraphs 0172-0174, Fig. 15, comparing respective model losses with loss limit; if model loss is within loss limit, maintaining newly determined bit width; if model loss is not within loss limit, discarding newly determined bit width and retaining previous bit width; i.e. one of the two quantization levels is selected based on it meeting criteria of providing the lowest possible quantization level (which enables faster/more efficient execution of the model) while also keeping resulting quantization error, such as loss of accuracy, within a threshold amount; therefore the quantization level is selected which provides the best performance with respect to the combined speed and accuracy of the neural network/model). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Choi and Sakai in front of him, to have modified the teachings of Choi (directed to techniques for adaptive quantization level selection in federated learning) to incorporate the teachings of Sakai (directed to tensor quantization), to include the capability to select the quantization level which provides the best performance with respect to both model speed and model accuracy. One of ordinary skill would have been motivated to perform such a modification in order to shorten execution time of neural networks as described in Sakai (paragraphs 0030-0032).

With respect to claims 6 and 16, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed, and Choi further teaches wherein automatically adjusting the quantization level is based in part on bandwidth constraints and/or a size of a machine learning model with which the gradient vectors are associated (e.g. paragraph 0048, server selecting and indicating quantization level for each iteration in training the global model; if conditions change, server selecting quantization level such that worker can transmit compressed gradient within some finite time, such as selecting a lower quantization when channel conditions worsen, resulting in lower data rates; paragraph 0102, Fig. 3, selecting lower quantization level such that worker may transmit compressed gradient data within finite time; paragraph 0136, quantization level is based on a bandwidth of a channel for transmitting the compressed gradient data).

With respect to claims 8 and 18, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed, and further teaches wherein one of the gradient vectors comprises a change that one of the edge nodes has made to a machine learning model deployed at the/that edge node (e.g. paragraph 0078, UE updating parameters of local model, computing gradient data using the updated local model, compressing the gradient data using the quantization level, and transmitting the compressed gradient data to the server; paragraph 0082, combining gradient data to update the global model; paragraph 0090, weights and parameters of dimensional parameters associated with layers of model; gradient data associated with the dimensional parameters received from the set of workers; paragraph 0126, gradient data output based on updating the machine learning model).

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Choi in view of Sakai, further in view of Abelha Ferreira et al. (US 20220138498 A1).

With respect to claims 3 and 13, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed. Choi and Sakai do not explicitly disclose wherein the quantizing comprises performing sign compression on the gradient vectors. However, Abelha Ferreira teaches wherein the quantizing comprises performing sign compression on the gradient vectors (e.g. paragraphs 0029-0030, compression achieved by sending vector of same number of elements as gradient vector which only includes signs of each gradient in the gradient vector, referred to as the gradient sign vector; instead of transmitting gradients using 32 bit values, transmitting using a single bit (i.e. where this reduction in bit widths/sizes used for gradient vector transmission is analogous to quantization according to a quantization level); such a compression scheme is referred to as sign compression). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Choi, Sakai, and Abelha Ferreira in front of him, to have modified the teachings of Choi (directed to techniques for adaptive quantization level selection in federated learning) and Sakai (directed to tensor quantization) to incorporate the teachings of Abelha Ferreira (directed to compression switching for federated learning), to include the capability to perform, as the quantization/compression, sign compression of the gradient vectors. One of ordinary skill would have been motivated to perform such a modification in order to address potential data privacy concerns and reduce the amount of data being sent, and thereby the network bandwidth necessary to transmit results, while retaining an acceptable level of prediction accuracy as described in Abelha Ferreira (paragraphs 0029-0030).
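Sign compression as characterized above — one bit per gradient element instead of a 32-bit float — can be sketched as follows. The bit-packing approach and function names are illustrative assumptions, not the implementation of Abelha Ferreira.

```python
import numpy as np

def sign_compress(grad: np.ndarray) -> np.ndarray:
    """One bit per gradient element: 1 for non-negative, 0 for negative.

    packbits yields a ~32x reduction versus float32 transmission,
    which is the bandwidth saving the rejection points to."""
    return np.packbits(grad >= 0)

def sign_decompress(packed: np.ndarray, length: int,
                    magnitude: float = 1.0) -> np.ndarray:
    """Rebuild a +/- magnitude vector from the packed sign bits.

    `magnitude` is a hypothetical step size the receiver would apply;
    how the magnitude is chosen is outside this sketch."""
    bits = np.unpackbits(packed)[:length]
    return np.where(bits == 1, magnitude, -magnitude)
```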
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Choi in view of Sakai, further in view of Moradi et al. (US 20230041074 A1).

With respect to claims 9 and 19, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed. Choi and Sakai do not explicitly disclose wherein the edge nodes are able to enter, or leave, at any time, a federation that includes the edge nodes. However, Moradi teaches wherein the edge nodes are able to enter, or leave, at any time, a federation that includes the edge nodes (e.g. paragraph 0010, dynamically updating grouping of local client nodes that participate in distributed machine learning; paragraph 0046, local client nodes may join a new federation or leave an old federation from time to time, creating a situation where a new local client node is joining a federation that has already begun to train a central ML model; policies adopted for late joining local client nodes, etc.). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Choi, Sakai, and Moradi in front of him, to have modified the teachings of Choi (directed to techniques for adaptive quantization level selection in federated learning) and Sakai (directed to tensor quantization) to incorporate the teachings of Moradi (directed to distributed machine learning using network measurements), to include the capability for edge nodes to enter or leave the federation freely/at any time. One of ordinary skill would have been motivated to perform such a modification in order to improve the performance of models trained in a distributed manner, while preventing joining and leaving of local client nodes to the federation from negatively impacting performance of the model, and permitting overall model accuracy to be sustained when a local client node joins late in training, as described in Moradi (paragraphs 0009-0010, 0046).
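The transient-membership behavior attributed to Moradi can be pictured as a registry that edge nodes may join or leave between rounds, with each training round operating on a snapshot of the current membership. This class and its names are hypothetical, offered only to make the claim limitation concrete.

```python
class Federation:
    """Minimal registry: edge nodes may enter or leave at any time;
    each training round sees a snapshot of current membership."""

    def __init__(self) -> None:
        self.members: set[str] = set()

    def join(self, node_id: str) -> None:
        self.members.add(node_id)      # a late joiner simply appears next round

    def leave(self, node_id: str) -> None:
        self.members.discard(node_id)  # no error if the node already dropped out

    def round_participants(self) -> list[str]:
        return sorted(self.members)    # stable snapshot for this round
```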
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Choi in view of Sakai, further in view of Tang et al. (US 20240388383 A1).

With respect to claims 10 and 20, Choi in view of Sakai teaches all of the limitations of claims 1 and 11 as previously discussed, and Sakai further teaches wherein: when the validating indicates that the lower quantization level yields better performance, relative to the quantization level, of a machine learning model with which the gradient vectors are associated, the lower quantization level is adopted (e.g. paragraph 0004, quantization is known technique for shortening execution time of neural networks; paragraphs 0072 and 0130, obtaining minimum bit width that makes the quantization error smaller than the threshold; paragraphs 0115-0116, high degree of quantization increases risk of accuracy deterioration; performing accuracy assurance; when recognition rate of quantized model determined to be deteriorated from recognition rate of FP32, parameters are set again, and quantization is performed again; paragraphs 0172-0174, Fig. 15, comparing respective model losses with loss limit; if model loss is within loss limit, maintaining newly determined bit width; if model loss is not within loss limit, discarding newly determined bit width and retaining previous bit width; i.e. one of the two quantization levels is selected based on it meeting criteria of providing the lowest possible quantization level (which enables faster/more efficient execution of the model) while also keeping resulting quantization error, such as loss of accuracy, within a threshold amount; therefore the quantization level is selected which provides the best performance with respect to the combined speed and accuracy of the neural network/model, including selection of the lower quantization level when it provides both the highest degree of shortened execution time while maintaining an acceptable level of model accuracy). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Choi and Sakai in front of him, to have modified the teachings of Choi (directed to techniques for adaptive quantization level selection in federated learning) to incorporate the teachings of Sakai (directed to tensor quantization), to include the capability to select the quantization level which provides the best performance with respect to both model speed and model accuracy. One of ordinary skill would have been motivated to perform such a modification in order to shorten execution time of neural networks as described in Sakai (paragraphs 0030-0032).

Choi and Sakai do not explicitly disclose a counter set to zero; and when a value of the counter is greater than a number of federated learning cycles that are run before testing a different quantization level, the quantization level is increased to a higher quantization level. However, Tang teaches a counter set to zero; and when a value of the counter is greater than a number of federated learning cycles that are run before testing a different quantization level, the quantization level is increased to a higher quantization level (e.g. paragraph 0131, switching from first communication mode to second communication mode after predetermined number of training iterations have occurred; paragraph 0154, quantization precision level used in high-reliability mode is higher than quantization precision level used in low-reliability mode; paragraph 0155, using lower precision quantization in lower reliability mode during early phase of training and switching to operate in high reliability mode afterwards to further improve training performance; Figs. 8-10, showing an initial training activation step, prior to any training iterations, followed by N training iterations using a low reliability mode, then switching and performing additional training iterations using a high reliability mode; i.e. a number of iterations is counted in order to know when to switch from a mode using a lower quantization level to a mode using a higher quantization level, where this counted number of iterations may be 0 in an initial phase of the process during which the lower quantization level is used, and then 1 during the first training iteration all the way up to N, and then, once the number of iterations counted becomes greater than N (i.e. counter value is greater than number of federated training cycles run before testing a different quantization level), the mode is switched to use a higher quantization level). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, having the teachings of Choi, Sakai, and Tang in front of him, to have modified the teachings of Choi (directed to techniques for adaptive quantization level selection in federated learning) and Sakai (directed to tensor quantization) to incorporate the teachings of Tang (directed to reliability adaptation for artificial intelligence training, such as federated training), to include the capability to count a number of training iterations in order to know when to switch from a mode using a lower quantization level to a mode using a higher quantization level, where this counted number of iterations may be 0 in an initial phase of the process, and then 1 during the first training iteration all the way up to N, and then, once the number of iterations counted becomes greater than N, the mode is switched to use a higher quantization level (as taught by Tang). One of ordinary skill would have been motivated to perform such a modification in order to reduce communication overhead during early phases of training, while permitting further improvements in training performance in later phases of training, as described in Tang (paragraph 0009).
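The counter mechanism mapped onto claims 10 and 20 — run N federated learning cycles at one level, then test a different (here higher) level — reduces to a few lines. This is a minimal sketch assuming a bit-width encoding of the level; N, the step size, and the cap are illustrative, not values from Tang.

```python
def next_quantization_level(counter: int, cycles_before_test: int,
                            bits: int, max_bits: int = 32) -> tuple[int, int]:
    """Advance a per-round counter; once it exceeds the number of cycles to
    run before testing a different level, step the level up and reset."""
    counter += 1
    if counter > cycles_before_test:
        return 0, min(bits + 1, max_bits)  # test a higher quantization level
    return counter, bits                    # keep counting at the current level

# Example: starting from a counter set to zero, the level steps up on the
# round after the counter passes cycles_before_test.
counter, bits = 0, 4
for _ in range(8):
    counter, bits = next_quantization_level(counter, cycles_before_test=3, bits=bits)
```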
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. "The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain." In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). Further, a reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including nonpreferred embodiments. Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989). See also Upsher-Smith Labs. v. Pamlab, LLC, 412 F.3d 1319, 1323, 75 USPQ2d 1213, 1215 (Fed. Cir. 2005); Celeritas Technologies Ltd. v. Rockwell International Corp., 150 F.3d 1354, 1361, 47 USPQ2d 1516, 1522-23 (Fed. Cir. 1998).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEREMY L STANLEY, whose telephone number is (469) 295-9105. The examiner can normally be reached Monday-Friday from 9:00 AM to 5:00 PM CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at telephone number (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/JEREMY L STANLEY/
Primary Examiner, Art Unit 2127

Prosecution Timeline

Aug 26, 2022: Application Filed
Aug 09, 2025: Non-Final Rejection — §103
Oct 29, 2025: Response Filed
Jan 10, 2026: Final Rejection — §103
Mar 23, 2026: Interview Requested
Apr 02, 2026: Applicant Interview (Telephonic)
Apr 02, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591827 — ETHICAL CONFIDENCE FABRICS: MEASURING ETHICAL ALGORITHM DEVELOPMENT — granted Mar 31, 2026 (2y 5m to grant)
Patent 12580783 — CONFIGURING 360-DEGREE VIDEO WITHIN A VIRTUAL CONFERENCING SYSTEM — granted Mar 17, 2026 (2y 5m to grant)
Patent 12572266 — ACCESSING AND DISPLAYING INFORMATION CORRESPONDING TO PAST TIMES AND FUTURE TIMES — granted Mar 10, 2026 (2y 5m to grant)
Patent 12561041 — Systems, Methods, and Graphical User Interfaces for Interacting with Virtual Reality Environments — granted Feb 24, 2026 (2y 5m to grant)
Patent 12555684 — ASSESSING A TREATMENT SERVICE BASED ON A MEASURE OF TRUST DYNAMICS — granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 48%
With Interview: 92% (+44.7%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 276 resolved cases by this examiner. Grant probability derived from career allow rate.
