Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s amendment filed 2 January 2026 introduces new issues of 35 U.S.C. 112 discussed below.
Applicant’s arguments filed 2 January 2026 have been fully considered but they are moot in view of the new grounds of rejection presented in this Office action.
Note that applicant's arguments are directed to the claims as amended.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The specification as originally filed does not support the newly amended features of evaluating whether gradients obtained for the clusters for the round are converging towards a stable convergency state and stopping the clustering operation when the convergence check indicates that gradients from the nodes are converging toward the stable convergency state prior to reaching a predefined maximum number of clustering rounds.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
The specification as originally filed does not support the newly amended features of evaluating whether gradients obtained for the clusters for the round are converging towards a stable convergency state and stopping the clustering operation when the convergence check indicates that gradients from the nodes are converging toward the stable convergency state prior to reaching a predefined maximum number of clustering rounds.
For purposes of applying the prior art, claims 1-20 are interpreted such that clustering rounds stop once a stable convergency state is obtained before a threshold number of rounds is reached.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Velicheti, Raj Kiriti, Derek Xia, and Oluwasanmi Koyejo, "Secure byzantine-robust distributed learning via clustering," arXiv preprint arXiv:2110.02940 (2021), of record, provided by the applicant, further in view of Chu et al. (US 2021/0374617 A1).
Regarding claim 1, Velicheti substantially discloses a method comprising:
performing a clustering operation in a round of federated learning, wherein nodes participating in the federated learning are grouped into clusters (see Fig.1 at page 2);
determining a gradient for the clusters for the round (see at least Problem Formulation at page 3);
performing a convergence check operation (see Convergence Analysis at page 4);
The difference is that Velicheti does not specifically show that the convergence check operation "evaluates whether gradients obtained for the clusters for the round are converging towards a stable convergency state".
However, it is customary in the art to do so, as shown by Chu (see at least [0062]… "After determining that the convergence condition is satisfied, the central node 110 may notify each client 102 that the training phase is ended.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include such features when implementing the method of Velicheti in order to determine whether convergence is satisfactory.
Velicheti/Chu further teaches
performing another round of clustering if the convergence check operation fails to indicate that the gradients are converging toward the stable convergency state, and stopping the clustering operation when the convergence check indicates that gradients from the nodes are converging toward the stable convergency state (see Velicheti Fig. 1 rounds 1 and 2 at page 2; Chu [0052]… "The determination whether the convergence condition is satisfied may be performed by the central node 110, or in some cases may be performed by individual clients 102. If the convergence condition is not satisfied, the method 400 may return to step 404 to perform another round of training"); and
updating a model with the gradients when the convergence check operation succeeds (see Velicheti Introduction second paragraph at page 1).
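For illustration only, the round-and-check flow mapped to claim 1 above may be sketched as follows. This is a minimal hypothetical sketch (scalar gradients, invented function names, and an invented threshold); it is not code from Velicheti or Chu and is not part of the record:

```python
def cluster_nodes(node_grads, k):
    # Hypothetical clustering operation: round-robin grouping of
    # participating nodes into k clusters
    return [node_grads[i::k] for i in range(k)]

def convergence_check(cur, prev, threshold):
    # Hypothetical check: gradients are treated as converging toward a
    # stable convergency state when successive round centroids are close
    return prev is not None and abs(cur - prev) < threshold

def federated_rounds(node_grads, k=2, threshold=1e-3, max_rounds=10):
    model, prev, rounds = 0.0, None, 0
    for _ in range(max_rounds):
        rounds += 1
        clusters = cluster_nodes(node_grads, k)
        grads = [sum(c) / len(c) for c in clusters]  # gradient per cluster
        cur = sum(grads) / len(grads)                # round centroid
        if convergence_check(cur, prev, threshold):
            model -= cur   # update the model when the check succeeds
            break          # stop clustering before max_rounds
        prev = cur         # otherwise perform another round
    return model, rounds
```

With static gradients the check succeeds on the second round, so the loop stops well before the predefined maximum number of rounds.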
Regarding claim 2, Velicheti/Chu further teaches the method of claim 1, further comprising performing secure aggregation for the gradients (see Velicheti page 2 last paragraph).
Regarding claim 3, Velicheti/Chu further teaches or suggests the method of claim 2, further comprising performing robust aggregation for the gradients to generate a final gradient for the round (see Velicheti page 2 first paragraph).
Regarding claim 4, Velicheti/Chu further teaches or suggests the method of claim 3, wherein the final gradient for the round is derived from a list of gradients (see Velicheti 5.2.1 Convergence Rates at page 5).
Regarding claim 5, Velicheti/Chu further teaches or suggests the method of claim 4, wherein the convergence check operation includes determining a centroid for the list of gradients in the round and determining a second centroid corresponding to the list of gradients for a previous round (see Velicheti "Robust Aggregation" distance-based aggregation at page 2).
Regarding claim 6, Velicheti/Chu further teaches or suggests the method of claim 5, further comprising determining a distance between the centroid and the second centroid (see Velicheti "Robust Aggregation" distance-based aggregation at page 2).
Regarding claim 7, Velicheti/Chu further teaches the method of claim 6, wherein the convergence check fails when the distance is greater than a threshold distance (see Velicheti 6.1.1 Impact of cluster size, last paragraph at page 6).
Regarding claim 8, Velicheti/Chu further teaches the method of claim 6, wherein the convergence is determined when the distance is less than a threshold distance (see Velicheti B.1 Distance based robust aggregation at page 10).
Regarding claim 9, Velicheti/Chu further teaches or suggests the method of claim 1, further comprising performing a warm-up operation that includes a minimum number of rounds, wherein the convergence check operation is performed after the minimum number of rounds have been completed (see Velicheti page 4, Algorithm 1, re-clustering rounds).
Regarding claim 10, Velicheti/Chu further teaches or suggests the method of claim 1, further comprising dynamically adjusting a maximum number of rounds when convergence fails after performing the maximum number of rounds, wherein the maximum number of rounds is less than or equal to an upper limit of rounds (see Velicheti page 1, Introduction, model updates; page 7, section 6.1.2, Impact of re-clustering).
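For illustration only, the warm-up and dynamic-adjustment features mapped to claims 9 and 10 above may be sketched as follows; the function names and step size are hypothetical and not drawn from the cited references:

```python
def convergence_check_due(round_index, warm_up_rounds):
    # Warm-up (claim 9): the convergence check operation is performed
    # only after the minimum number of rounds has been completed
    return round_index >= warm_up_rounds

def adjust_max_rounds(max_rounds, upper_limit, step=5):
    # Dynamic adjustment (claim 10): when convergence fails after the
    # maximum number of rounds, raise the cap, but never above the
    # upper limit of rounds
    return min(max_rounds + step, upper_limit)
```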
Claims 11-20 essentially recite limitations similar to those of claims 1-10 in the form of a non-transitory computer storage medium and are thus rejected for the same reasons discussed for claims 1-10 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
ANWAR et al (US 20220156574 A1) teach a computer-implemented method for training a machine learning model, the method comprising performing, by a computing device, a plurality of training iterations, wherein each training iteration comprises inputting a set of training data to the machine learning model, determining an output of the model from processing the set of training data, and updating one or more parameters of the model based on the output of the model, the method further comprising, for one or more of the training iterations, determining, based on the output of the model for the training iteration, a measure of the stability of the model; and determining, based on the stability of the model, whether to send the updated model parameters via a communication channel to a remote computing device.
Song et al (WO 2022033579 A1) teach a federated learning method, device and system, which are used for improving the robustness of a federated learning system. The method comprises: a first client receiving, from a serving end, a first value of a parameter of a machine learning model, wherein the first client is one of a plurality of clients; where the first value of the parameter does not meet a first condition, the first client performing the present round of training according to first training data, the machine learning model and a local value of the parameter, so as to obtain a training result of the present round of training, wherein the first training data is data reserved at the first client; and the first client sending the training result and alarm information to the serving end, wherein the alarm information indicates that the first value of the parameter does not meet a requirement.
Liu Ji et al (EP 4113394 A2) teach a federated learning method, apparatus and system, an electronic device, and a storage medium, which relate to a field of an artificial intelligence technology, in particular to fields of computer vision and deep learning technologies. A specific implementation solution includes: performing a plurality of rounds of training until a training end condition is met, so as to obtain a trained global model; and publishing the trained global model to a plurality of devices. Each round of training in the plurality of rounds of training includes: transmitting a current global model to at least some devices in the plurality of devices; receiving trained parameters for the current global model from the at least some devices; performing an aggregation on the received parameters to obtain a current aggregation model; and adjusting the current aggregation model based on a globally shared dataset, and updating the adjusted aggregation model as a new current global model for a next round of training.
Liu et al (WO 2022226903 A1) teach a federated learning method for a k-means clustering algorithm. The method comprises longitudinal federated learning and transverse federated learning. The transverse federated learning comprises the following steps: 1) initializing K clusters, and different participators distributing local samples to a cluster that is closest to the samples; 2) for each cluster, calculating a new clustering center of the cluster; and 3) if the clustering center changes, returning to step 1). The longitudinal federated learning comprises the following steps: 1) L participators respectively and locally running a k-means clustering algorithm to obtain T clusters and performing intersection to obtain new TL clusters, or performing an AP clustering algorithm to obtain Ti clusters and performing intersection to obtain new (I) clusters; 2) using new (I) clustering centers as input samples, and initializing K clusters; 3) distributing each sample to a cluster that is closest to the sample; 4) for each cluster, calculating the new clustering center of the cluster; and 5) if the clustering center changes, returning to step 3).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UYEN T LE whose telephone number is (571)272-4021. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay M Bhatia, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UYEN T LE/Primary Examiner, Art Unit 2156 24 February 2026