Prosecution Insights
Last updated: April 19, 2026
Application No. 18/332,974

CLIENT MODEL TRAINING METHOD IN DECENTRALIZED LEARNING ENVIRONMENT AND CLIENT DEVICE PERFORMING THE SAME

Non-Final OA: §103, §112
Filed: Jun 12, 2023
Examiner: ALSHAHARI, SADIK AHMED
Art Unit: 2121
Tech Center: 2100 (Computer Architecture & Software)
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)
Grant Probability: 35% (At Risk)
OA Rounds: 1-2
To Grant: 4y 5m
With Interview: 82%

Examiner Intelligence

Grants only 35% of cases.

Career Allow Rate: 35% (12 granted / 34 resolved; -19.7% vs TC avg)
Interview Lift: +47.1% for resolved cases with interview (a strong lift)
Typical Timeline: 4y 5m avg prosecution; 24 applications currently pending
Career History: 58 total applications across all art units

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 34 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Status of Claims

Claim(s) 1-16 are pending and are examined herein. Claim(s) 1-16 are rejected under 35 U.S.C. §§ 112 and 103.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on June 12, 2023 is in compliance with the provisions of 37 CFR 1.97 and has been considered by the examiner.

Priority

Acknowledgment is made of the applicant’s claim to foreign priority Application No. 10-2022-0168796, filed on December 6, 2022.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION. —The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim(s) 1-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Regarding Claim 1, the claim recites limitations that render the scope of the claimed invention indefinite for the following reasons:

The claim recites the limitation “generating a candidate model by performing learning on the model” (lines 4-5). The claim fails to provide a clear antecedent for “the model.” It is unclear whether the term refers to a local model, a client model, or a pre-existing model. This lack of clarity renders the scope of the claim undefined.

The claim recites the limitation “transmitting a training result for sharing the candidate model to a plurality of other clients within a critical time; .... as the critical time exceeds” (lines 6-11). The recitation of “critical time” uses a relative term, “critical,” with no objective boundary for determining the scope. The specification fails to provide a standard for establishing the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the claim.

The claim further recites “receiving other training results from at least one other client” (lines 9-10). It is unclear whether this limitation refers to receiving “the training results” from the previous limitation or some other training result generated by different clients. Additionally, the phrase “at least one other client” lacks a clear antecedent basis, as it is unclear whether it refers to at least one of the previously recited plurality of other clients or to a different, new “other client.” This lack of clarity renders the claim indefinite.

The claim further recites “and other training results (hereinafter, all training results) according to a predefined consensus algorithm” (lines 12-14). The recitation of “other training results” lacks a clear antecedent basis to refer back to the previously recited “other training results” in the earlier limitation.
The recitation of “(hereinafter, all training results)” improperly attempts to redefine “the training result and other training results” without clearly defining the scope of what constitutes “all training results.” The term “hereinafter” introduces uncertainty as to whether this definition applies only within the remaining steps or extends to the dependent claims, making the scope of the claimed process undefined.

Lastly, the claim recites the limitation “performing an update to the consented model” (line 15). The claim fails to provide an antecedent for “the consented model.” While the earlier limitation recites “performing model consensus,” the claim does not explicitly define the claim element “consented model.” Thus, the scope of the claimed update step is unclear.

For at least the above reasons, claim 1 does not particularly point out and distinctly claim the invention and is therefore indefinite under 35 U.S.C. § 112(b).

Regarding Claim 2, the claim recites the limitation “wherein, in the generating of the candidate model by training the model, an i-th candidate model is generated through a training process for a i-1-th (i is a natural number) model consented lastly.” However, the claim does not clearly define the training process or the model elements, and lacks a clear antecedent basis. Claim 2, which depends from parent claim 1, recites “the model” without a clear antecedent basis for what “the model” refers to (the candidate model itself, a pre-existing model, or a different client model). Additionally, the term “i-1-th” is indefinite because it is unclear whether it represents a naming or labeling notation identifying a prior iteration of the model or a mathematical subtraction operation. Furthermore, it is unclear whether the training process must initialize the i-th candidate model using the previous i-1-th consented model, or what constitutes the “model consented lastly.” Because the claim fails to clearly define the training process and how the previous model is incorporated, a person of ordinary skill in the art cannot determine with reasonable certainty the scope of the training process.

Regarding Claim 3, the claim recites the limitation “wherein the generating of the candidate model by performing the model includes specifying a hash value for the i-1-th model consented lastly through a hash function.” Claim 1, from which claim 3 depends, does not define the claimed element “the i-1-th model consented lastly.” Thus, the recitation of “the i-1-th model consented lastly” lacks a clear antecedent basis in the claim. Additionally, the claim does not clearly define the scope or use of the hash operation, making it unclear which model is to be hashed or how the hash is applied. As a result, the scope of claim 3 is indefinite.

Regarding Claim 4, the claim recites the limitation “transmitting other training results received from the at least one other client to other clients that have not received at least one of all the training results.” The claim recites the terms “other training results” and “other clients” without a clear antecedent basis for these terms in the claims. It is unclear whether “other training results” refers to the results received in claim 1 or to additional results from another client. It is also unclear whether “other clients” refers to the “plurality of other clients” recited in claim 1, from which claim 4 depends, or to a different set of clients.
Because the antecedent basis of these terms is unclear, a person of ordinary skill in the art cannot determine with reasonable certainty the scope of the claimed invention.

Regarding Claim 5, the claim recites the limitation “wherein, in the performing of the model consensus on all the training results according to the consensus algorithm, the model consensus is performed based on data existing in any one of the plurality of clients that is selected as a leader.” The claim recites the terms “all the training results,” “the consensus algorithm,” and “the plurality of clients,” which lack a clear antecedent basis. Claim 1, from which claim 5 depends, recites “a plurality of other clients” and “a predefined consensus algorithm.” It is unclear whether “all the training results” includes only the training results, the other training results, or both results recited in claim 1. It is also unclear whether the recitation of “the plurality of clients” refers to the previously recited plurality of other clients or to a different set of clients, and whether “the consensus algorithm” refers to the predefined consensus algorithm of claim 1 or to a different consensus algorithm. For at least these reasons, the scope of claim 5 is unclear, and one of ordinary skill in the art would not be reasonably apprised of the metes and bounds of the claimed invention.

Regarding Claim 6, the claim recites the limitations “wherein the performing of the model consensus on all the training results according to the consensus algorithm includes: selecting any one of the plurality of clients as the leader according to a round robin method; configuring data existing in a client as a test set according to being selected as the leader; checking accuracy of all candidate models based on the test set; and performing the model consensus with a candidate model with highest accuracy.” Claim 6 recites “checking accuracy of all candidate models based on the test set,” whereas claim 1, from which claim 6 depends, recites “a candidate model” without defining a plurality or clarifying what constitutes “all candidate models.” Claim 6 further recites “configuring data existing in a client as a test set according to being selected as the leader,” but fails to provide a clear antecedent basis for “a client” or to clarify whether the data exists in the selected leader client, in each client, or in any client among the plurality. As a result, the source of the test set cannot be reasonably determined. Additionally, claim 6 recites “performing the model consensus with a candidate model with highest accuracy,” which lacks clarity as to whether this refers to “a candidate model” previously recited in claim 1 or to another candidate model from the recited “all candidate models.” Accordingly, one of ordinary skill in the art would not be able to ascertain, with reasonable certainty, the scope of the claimed invention.

Regarding Claim 7, the claim recites the limitation “wherein, in the performing of the model consensus on all the training results according to the consensus algorithm, the model consensus is performed based on a test set configured by a client in a group configured according to a predetermined condition among the plurality of clients,” which includes elements that render the claim indefinite.
The recitations of “the consensus algorithm” and “the plurality of clients” lack a clear antecedent basis, because claim 1 recites “a predefined consensus algorithm” and “a plurality of other clients.” Claim 7 does not clearly indicate whether it refers to those previously recited elements or to different elements, making the scope of the claim undefined. The recitation of “a client in a group” lacks antecedent basis and fails to identify which client configures the test set, making it unclear whether the client is selected from the plurality of other clients or belongs to a different client group. Furthermore, the recitation of “a predetermined condition” renders the claim indefinite because the claim does not clearly define what constitutes the predetermined condition or provide objective criteria for configuring the group of clients according to this condition. As a result, the scope of the claim is undefined, and the specification does not provide a standard for determining the scope of the claimed predetermined condition.

Regarding Claim 8, the claim recites the limitation “wherein, in the performing of the model consensus on all the training results according to the consensus algorithm, a group including a client proposing a candidate model among the plurality of clients is configured, the test set including noise by each of the clients in the group is configured, and as accuracy of all candidate models is calculated based on the test set and shared with clients in the group, the model consensus is performed based on the accuracy,” which introduces elements that render the claim indefinite. Specifically, the recitations of “the consensus algorithm,” “a group including a client,” “a candidate model,” and “clients in the group” fail to clearly identify whether these terms refer to elements previously recited in the claims from which claim 8 depends or introduce different elements, making the scope of the claim undefined. Accordingly, one of ordinary skill in the art would not be reasonably apprised of the scope of the claim.

Regarding Claims 9-16, the claims recite substantially similar limitations as those of claims 1-8 and are rejected for similar reasons and rationale.

In view of the above, the Examiner respectfully requests that Applicant thoroughly review the claims for compliance with the requirements set forth under 35 U.S.C. § 112. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-2, 5, 7, 9-10, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. (NPL: "A Blockchain-based Decentralized Federated Learning Framework with Committee Consensus" (2020)) in view of Zhang et al. (NPL: "Refiner: A Reliable Incentive-Driven Federated Learning System Powered by Blockchain" (2021)).

Regarding Claim 1, Li discloses the following:

A client model training method in a decentralized learning environment, the client model training method comprising: (Li, [Pp. 2-3, Section III] “Federated Learning (FL) enables the machine learning algorithms training across multiple distributed clients without exchanging their data samples. In the original FL settings, one centralized server takes control of the training process, including client management, global model maintenance, and gradient aggregation. During each training round, the server broadcasts the current model to some participating nodes. After receiving the model, nodes locally update it with their local data and submit the update gradients to the server. The server then aggregates and applies the local gradients into the model for the next round. The decentralized nature of blockchain can replace the place of the central server. As aforementioned, the functions of the centralized server can be implemented by the Smart Contract (SC) instead, and be actuated by transactions on the blockchain. To tackle this vision, we propose BFLC, which is a Blockchain-based Federated Learning framework with Committee consensus. Without any centralized server, the participating nodes perform FL via blockchain, which maintains the global models and local updates.”)

generating a candidate model by performing learning on the model; (Li, [Pp. 2-3, Section III] “In the beginning, a randomly initialized model was placed into the #0 block, then the 0-th round of training starts. Nodes access the current model and execute local training, and put the verified local gradients to new update blocks.” [P. 4, Section III-C] “Nodes other than committees perform local training each round. In FL, for the sake of security and privacy, raw data will be kept in nodes locally, and these nodes only upload the gradients to the blockchain. ... Nodes can actively obtain the current global model at any time and perform local training. The gradients will be sent to the committee and be validated.”) [Examiner’s Note: Li teaches participant nodes (i.e., clients) locally training the model using their data (i.e., generating a candidate model).]

transmitting a training result for sharing the candidate model to a plurality of other clients within a critical time; (Li, [Pp. 3-4, Section III-B] “Considering the computation and communication cost of consensus, we propose an efficient and secure Committee Consensus Mechanism (CCM) to validate the local gradients before appending it to the chain. Under this setting, a few honest nodes will constitute a committee in charge of verification of local gradients and blocks generation. In the meantime, the rest nodes execute local training and send the local updates to the committee.” [P. 4, Section III-C] “Nodes can actively obtain the current global model at any time and perform local training. The gradients will be sent to the committee and be validated. When eligible updates are packaged on the blockchain, as a reward, tokens can be attached to them.” Fig. 1: The training process of the proposed BFLC framework. (1) Training nodes acquire the newest global model and perform local training. (2) Training nodes send local updates to the committee. (3) The committee validates the updates and records the new model or updates onto the blockchain.) [Examiner’s Note: Li’s transmission to the committee/other participant nodes proceeds through round-based k-update collection, which implicitly defines a time.]

receiving other training results from at least one other client; (Li, [P. 4, Section III-B] “In the meantime, the rest nodes execute local training and send the local updates to the committee. The committee then validates the updates and assign a score on them.”) [Examiner’s Note: Committee members receive local updates (training results) from other training nodes (i.e., other clients). The committee validates these received updates.]

... performing model consensus on the training result and other training results (hereinafter, all training results) according to a predefined consensus algorithm; and performing an update to the consented model. (Li, [P. 3, Section III-A] “When there are continuously enough update blocks, the smart contract triggers the aggregation, and a new model of the next round is generated and placed on the chain.” [Pp. 3-4, Section III-B] “The competition-based consensus mechanisms append blocks on the chain first, whereafter, the consensus meets. Conversely, the communication-based generate mechanisms reach an agreement before appending blocks. Considering the computation and communication cost of consensus, we propose an efficient and secure Committee Consensus Mechanism (CCM) to validate the local gradients before appending it to the chain. Under this setting, a few honest nodes will constitute a committee in charge of verification of local gradients and blocks generation. In the meantime, the rest nodes execute local training and send the local updates to the committee. The committee then validates the updates and assign a score on them. Only the qualified updates will be packed onto the blockchain. At the beginning of the next round, a new committee is elected basing on the scores of nodes in the previous round, which means that the committee will not be re-elected. It is noteworthy that the update validation is a pivotal component of the CCM, therefore, we describe a feasible approach: the committee members validate the local updates by treating their data as a validation set, and the validation accuracy becomes the score. This is the minimized approach that acquires no further operation of the committee, but only the basic ability to run the learning model. After combining the scores from the various committee members, the median will become the score of this update.” [P. 4, Section III-C] “As aforementioned, a certain number of valid updates are required for each round. Therefore, when the committee validates enough local updates, the aggregation process is activated. These validated updates are aggregated by the committee into a new global model. The aggregation can be performed on the local gradients [11] or the local models [12], and the network transmission consumptions of these two methods are equal. After the new global model is packed on the blockchain, the committee will be elected again, and the next training round begins.” Further see [0028] and [0042]-[0044].) [Examiner’s Note: The CCM is the model consensus algorithm based on model validation, and the new updated model packed on the blockchain becomes the next new model (i.e., the consented model).]

While Li describes the process of local training of nodes within a decentralized environment through each round, and the committee consensus mechanism performing validation to determine the next updated global model, Li is silent as to whether the model consensus is performed as a critical time is exceeded. However, it would have been obvious in view of Zhang to implement a timer that triggers the transition from the training phase to the consensus/evaluation phase. Hereinafter, Li in view of Zhang teaches the following:

performing model consensus on the training result and other training results (hereinafter, all training results) according to a predefined consensus algorithm; and performing an update to the consented model (Zhang, [P. 3, Section 2.3] “2.3.5 Local Training. Each worker i retrieves the global model w from IPFS with the file handle and decrypts it with the key. Next, the worker updates the global model w on its local dataset. After computing the local model updates w_i, the worker i securely stores w_i in IPFS and shares w_i with the validators committee by invoking SubmitTrainingResults() with the file handle and encrypted keys of w_i. After all workers have committed their training results or a timer has expired, the status of the task becomes EVALUATING. 2.3.6 Model Evaluation and Aggregation. Each validator in the validators committee retrieves the validation dataset and local model updates using the file handles and encrypted keys. The validator first evaluates each local model w_i on the validation dataset D by computing L(D; w_i), where L is the loss function. To prevent corrupt local model updates, only qualified local model updates are accepted. ... After all commitments are submitted or a timer expired, the reveal phase starts and the task status is changed to REVEALING. Each validator sends M_i and s_i to the task contract. ... A majority rule is adopted to determine a consensus manifest. Validators who produce manifests that agree with the consensus manifest will be rewarded with an equal amount of Ether. Workers are rewarded according to their contributions as recorded in the consensus manifest.”) [Examiner’s Note: Zhang describes the consensus process through validation of workers’ training results and a majority rule to determine the global model for the next round.]

Accordingly, at the effective filing date, it would have been prima facie obvious to one ordinarily skilled in the art of machine learning to modify Li to incorporate the Refiner techniques for a reliable federated learning system as taught by Zhang. One would have been motivated to make such a combination in order to check the quality of local updates and prevent malicious participants from disrupting the system (Zhang [Section 2]).
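For orientation, the round mechanics the examiner maps onto claim 1 can be sketched in a few lines. The Python below is a minimal illustrative sketch assembled from the passages quoted above (Li's committee validation with median scoring and k required updates per round; Zhang's timer that ends the collection phase). Every class, function name, threshold, and constant in it is a hypothetical stand-in, not code from either reference or from the application.

```python
# Minimal sketch of the committee-consensus round described in the quoted
# passages. All identifiers and constants are hypothetical stand-ins.
import random
import time
from statistics import median

DEADLINE_S = 5.0    # analogue of the claimed "critical time" (Zhang's timer)
K_REQUIRED = 3      # Li: k validated updates trigger aggregation
ACCEPT_SCORE = 0.5  # hypothetical threshold for a "qualified" update


class Client:
    def train_locally(self, global_model):
        # Stand-in for local training: perturb global weights with local "data".
        return [w + random.gauss(0.0, 0.01) for w in global_model]

    def validate(self, candidate):
        # Stand-in for validation accuracy on this member's local data.
        return random.uniform(0.4, 0.9)


def committee_score(candidate, committee):
    # Li, Sec. III-B: each committee member scores an update on its own data;
    # the median of the members' scores becomes the update's score.
    return median(m.validate(candidate) for m in committee)


def training_round(global_model, workers, committee):
    qualified, start = [], time.monotonic()
    for worker in workers:
        if time.monotonic() - start > DEADLINE_S:
            break  # timer expired: move to evaluation/aggregation (per Zhang)
        candidate = worker.train_locally(global_model)
        if committee_score(candidate, committee) >= ACCEPT_SCORE:
            qualified.append(candidate)
        if len(qualified) >= K_REQUIRED:
            break  # enough validated updates: aggregation is triggered (per Li)
    if not qualified:
        return global_model  # nothing qualified this round; keep current model
    # FedAvg-style coordinate-wise mean of the qualified candidates.
    return [sum(ws) / len(ws) for ws in zip(*qualified)]


if __name__ == "__main__":
    workers = [Client() for _ in range(8)]
    committee = [Client() for _ in range(3)]
    print(training_round([0.0, 0.0, 0.0], workers, committee))
```

Note how the timer and the k-update count act as alternative round-ending triggers; that interplay is exactly where the rejection leans on Zhang to supply the explicit "critical time" that Li leaves implicit.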
Regarding Claim 2, Li in view of Zhang teaches the elements of claim 1 as outlined above, and further teaches:

wherein, in the generating of the candidate model by training the model, an i-th candidate model is generated through a training process for a i−1-th (i is a natural number) model consented lastly. (Li, [P. 3, Section III-A] “In the beginning, a randomly initialized model was placed into the #0 block, then the 0-th round of training starts. Nodes access the current model and execute local training, and put the verified local gradients to new update blocks. When there are continuously enough update blocks, the smart contract triggers the aggregation, and a new model of the next round is generated and placed on the chain. ... We denote the number of required updates for each round as k, and denote the number of rounds as t = 0, 1, .... Then we have: the # t×(k+1) block contains the model of t-th round, which is called model block, and the # [t×(k+1)+1, (t+1)×(k+1)−1] blocks contain the updates of t-th rounds, which are called update blocks.” [P. 4, Section III-C] “These validated updates are aggregated by the committee into a new global model. ... After the new global model is packed on the blockchain, the committee will be elected again, and the next training round begins.”)

Regarding Claim 5, Li in view of Zhang teaches the elements of claim 1 as outlined above, and further teaches:

wherein, in the performing of the model consensus on all the training results according to the consensus algorithm, the model consensus is performed based on data existing in any one of the plurality of clients that is selected as a leader. (Li, [P. 4, Section III-B] “In the meantime, the rest nodes execute local training and send the local updates to the committee. The committee then validates the updates and assign a score on them. Only the qualified updates will be packed onto the blockchain. ... It is noteworthy that the update validation is a pivotal component of the CCM, therefore, we describe a feasible approach: the committee members validate the local updates by treating their data as a validation set, and the validation accuracy becomes the score. ... the local data of the committee are taken as a validation set. As the alternating of committee members at each round, the validation set changes as well. In this setting, k-fold cross-validation on FL achieved.” [P. 5, Section IV-B] “At the end of each round, a new committee is elected from the providers of validated updates. In decentralized training settings, this election significantly affects the performance of the global model, because the committee decides which local updates will be aggregated. ... Random election: new committee members are randomly selected from validated nodes. ... Election by score: the providers with top validation scores constitute the new committee.”) [Examiner’s Note: The elected committee nodes perform validation on the locally trained models by treating their data as a validation set under the committee consensus mechanism. The elected committee broadly represents the selected leader.]

Regarding Claim 7, Li in view of Zhang teaches the elements of claim 1 as outlined above, and further teaches:

wherein, in the performing of the model consensus on all the training results according to the consensus algorithm, the model consensus is performed based on a test set configured by a client in a group configured according to a predetermined condition among the plurality of clients. (Li, [P. 4, Section III-B] “It is noteworthy that the update validation is a pivotal component of the CCM, therefore, we describe a feasible approach: the committee members validate the local updates by treating their data as a validation set, and the validation accuracy becomes the score. This is the minimized approach that acquires no further operation of the committee, but only the basic ability to run the learning model. After combining the scores from the various committee members, the median will become the score of this update. ... the local data of the committee are taken as a validation set. As the alternating of committee members at each round, the validation set changes as well. In this setting, k-fold cross-validation on FL achieved.” [P. 5, Section IV-B] “At the end of each round, a new committee is elected from the providers of validated updates. In decentralized training settings, this election significantly affects the performance of the global model, because the committee decides which local updates will be aggregated.”)

Regarding Claim 9, the claim recites substantially similar limitations as corresponding claim 1 and is rejected for similar reasons using similar teachings and rationale. Claim 1 is directed to a method, and claim 9 is directed to a device. Li in view of Zhang also discloses a BFLC blockchain system including storage for decentralized local training on participant devices.

Regarding Claim 10, the claim recites substantially similar limitations as corresponding claim 2 and is rejected for similar reasons using similar teachings and rationale.

Regarding Claim 13, the claim recites substantially similar limitations as corresponding claim 5 and is rejected for similar reasons using similar teachings and rationale.

Regarding Claim 15, the claim recites substantially similar limitations as corresponding claim 7 and is rejected for similar reasons using similar teachings and rationale.

Claim(s) 3 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Zhang as described above, and further in view of Zhao et al. (NPL: “Privacy-Preserving Blockchain-Based Federated Learning for IoT Devices” (2021)).

Regarding Claim 3, while Li in view of Zhang teaches the model training and update for each round, including blockchain headers of the participating nodes, Li in view of Zhang does not appear to explicitly suggest: wherein the generating of the candidate model by performing the model includes specifying a hash value for the i−1-th model consented lastly through a hash function. However, this limitation would have been obvious in view of Zhao. Hereinafter, Zhao, in combination with Li and Zhang, teaches:

wherein the generating of the candidate model by performing the model includes specifying a hash value for the i−1-th model consented lastly through a hash function. (Zhao, [P. 2, Sections I & II] “After training, customers sign on hashes of encrypted models with their private keys and transmit locally trained models to the blockchain. Selected miners verify identities of senders, download models and calculate the average of all model parameters to obtain the global model. One miner, selected as the temporary leader, encrypts and uploads the global model to the blockchain. ... The blockchain is a chain of blocks that contain the hash of the previous block, transaction information, and a timestamp. ... We implement the off-chain storage by using IPFS, and store hashes of data locations on the blockchain instead of actual files. The hash can be used to locate the exact file across the system.”)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Li, Zhang, and Zhao before them, to incorporate the method/system for privacy-preserving blockchain-based federated learning as taught by Zhao. One would have been motivated to make such a combination in order to protect the privacy of the extracted features; the system thus keeps the participating customers’ data confidential. Furthermore, the trained model is encrypted and signed by the sender to prevent attackers and imposters from stealing the model or deriving the original data through reverse engineering (Zhao [Section V-A]).

Regarding Claim 11, the claim recites substantially similar limitations as corresponding claim 3 and is rejected for similar reasons using similar teachings and rationale.

Claim(s) 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Zhang as described above, and further in view of Lee et al. (Pub. No.: US 20210152471 A1).

Regarding Claim 4, Li in view of Zhang teaches the elements of claim 1 as outlined above. While Li in view of Zhang teaches broadcasting local training/updates to participating nodes, Li in view of Zhang does not appear to explicitly teach: transmitting other training results received from the at least one other client to other clients that have not received at least one of all the training results. However, Lee, in combination with Li and Zhang, teaches:

transmitting other training results received from the at least one other client to other clients that have not received at least one of all the training results. (Lee, [0060]-[0063] “As shown in FIG. 1, a plurality of relay nodes 100 are connected to one or more blockchain peer nodes 200. Alternatively, a relay node 100 may be connected to another relay node to constitute a network. Also, each of the blockchain peer nodes 200 is connected to one relay node 100. In this case, the relay network shown in FIG. 1 is only one of several examples, and in an actual network, a larger number of relay nodes 100 may be connected to the blockchain peer node 200. That is, one relay node may be connected to two or more blockchain peer nodes and may be connected to two or more relay nodes.” [0088]-[0089] “the block packet processing unit 170 transmits a block packet to a blockchain peer node 200 and the relay node 100′ connected to the relay node 100 through the packet transcribing unit 130. When the received packet is a block packet, the block packet processing unit 170 may transmit the block packet to the blockchain peer node 200 and the relay node 100′ connected to the relay node 100 without changing the block packet.”)

Therefore, at the effective filing date, it would have been prima facie obvious to one of ordinary skill in the art to modify the combination of Li and Zhang to incorporate the relay network system of Lee. One would have been motivated to make such a combination because, to maintain the security of the blockchain while improving the TPS, the representative performance metric of a blockchain, the block propagation time must first be shortened (Lee [0052]).
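Stepping back to the hashing limitation of claim 3 discussed above, the short sketch below derives a digest for the last-consented model from its serialized parameters, loosely following Zhao's pattern of keeping files off-chain and recording only hashes on-chain. It is a sketch under stated assumptions: the list-of-floats model representation and all names are hypothetical; hashlib and json are Python standard library.

```python
# Hypothetical illustration of "specifying a hash value for the i-1-th model
# consented lastly through a hash function" (claim 3), in the spirit of Zhao's
# hash-on-chain / file-off-chain storage. Not code from any cited reference.
import hashlib
import json


def model_hash(weights):
    # Serialize canonically first, so identical parameters always hash identically.
    payload = json.dumps(weights).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


# Usage: tag a new candidate with the digest of the prior consented model,
# so its lineage can be checked against the on-chain record.
prev_consented = [0.12, -0.80, 0.33]      # hypothetical i-1-th consented model
lineage_tag = model_hash(prev_consented)  # e.g., recorded on-chain per Zhao
print(lineage_tag[:16], "...")
```

A content digest of this kind lets a block reference one specific prior model without storing the parameters themselves.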
Regarding Claim 12, the claim recites substantially similar limitations as corresponding claim 4 and is rejected for similar reasons using similar teachings and rationale.

Claim(s) 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Zhang as described above, further in view of Rengers et al. (NPL: “Don't mine, wait in line: fair and efficient blockchain consensus with robust round Robin” (2018)), and further in view of Zhou et al. (Pub. No.: US 20220188775 A1).

Regarding Claim 6, Li in view of Zhang teaches the elements of claim 5 as outlined above. Li in view of Zhang further teaches:

selecting any one of the plurality of clients as the leader ...; configuring data existing in a client as a test set according to being selected as the leader; checking accuracy of all candidate models based on the test set; and performing the model consensus with a candidate model with highest accuracy. (Li, [P. 2, Section II] “Considering the vast amount of learning nodes in the FL settings, a broadcasting consensus is highly time-consuming. Therefore, reducing the consensus cost is non-trivial. One [10] of the related works selects a leader to execute the consensus.” [P. 4, Section III-B] “It is noteworthy that the update validation is a pivotal component of the CCM, therefore, we describe a feasible approach: the committee members validate the local updates by treating their data as a validation set, and the validation accuracy becomes the score. ... the local data of the committee are taken as a validation set. As the alternating of committee members at each round, the validation set changes as well. In this setting, k-fold cross-validation on FL achieved.” [P. 4, Section III-C] “As aforementioned, a certain number of valid updates are required for each round. Therefore, when the committee validates enough local updates, the aggregation process is activated. These validated updates are aggregated by the committee into a new global model. The aggregation can be performed on the local gradients [11] or the local models [12], and the network transmission consumptions of these two methods are equal. After the new global model is packed on the blockchain, the committee will be elected again, and the next training round begins.” [P. 5, Section IV-B] “At the end of each round, a new committee is elected from the providers of validated updates. In decentralized training settings, this election significantly affects the performance of the global model, because the committee decides which local updates will be aggregated.”)

checking accuracy of all candidate models based on the test set; and performing the model consensus with a candidate model with highest accuracy. (Zhang, [P. 3, Section 2.3] “2.3.6 Model Evaluation and Aggregation. Each validator in the validators committee retrieves the validation dataset and local model updates using the file handles and encrypted keys. The validator first evaluates each local model w_i on the validation dataset D by computing L(D; w_i), where L is the loss function. To prevent corrupt local model updates, only qualified local model updates are accepted. ... Next, the validator calculates the worker’s contributions in terms of marginal model performance loss. ... Next, the validator ranks the workers according to their contributions.”)

While Li describes model consensus by electing a committee that uses a validation dataset to validate locally trained model performance and obtain the global model update, and Zhang further describes a validator retrieving a validation dataset to evaluate each model and determine the qualified local model updates for updating the global model, Li in view of Zhang does not appear to explicitly suggest that the leader is selected according to a round-robin method or that a candidate model with the highest accuracy is selected as the consented model. However, Rengers, in combination with Li and Zhang, teaches:

selecting any one of the plurality of clients as the leader according to a round robin method; (Rengers, [P. 2, Section 1] “New consensus scheme. We propose Robust Round Robin, a novel consensus scheme, where deterministic leader candidate selection is complemented with a lightweight interactive endorsement protocol. The main benefits of our approach are efficiency and fairness.” [P. 5, Section 3.4] “we complement the simple and deterministic round-robin selection with a lightweight leader endorsement mechanism. On each round, a small set of oldest identities are chosen as leader candidates. ... In this protocol, a candidate proposes a block and the endorsers confirm the block from the oldest candidate they observe. The leader candidate that receives the required quorum of q confirmations from the endorsers, is chosen as the leader to extend the chain with a new block.”)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Li, Zhang, and Rengers before them, to incorporate the leader selection consensus scheme using round robin as taught by Rengers. One would have been motivated to make such a combination in order to ensure fair leader selection without bias and to achieve both efficiency and security (Rengers [Abstract]).

The combination of Li, Zhang, and Rengers does not appear to explicitly teach that a candidate model with the highest accuracy is selected as the consented model. However, Zhou, in combination with Li, Zhang, and Rengers, teaches:

performing the model consensus with a candidate model with highest accuracy. (Zhou, [0033] “the term ‘model selection at a local site’ refers to an algorithm used to select a local model to use at the local site, where the pool of the candidate models comes from either the global model for a cohort and/or local models from other sites.” [0044] “Various criteria may be used for selecting a local model from the local model pool, such as the similarity of the site providing the local model to the local site 300, the proximity of the site providing the local model to the local site 300, the performance metric of the local model, or the like. ... Data can be applied to the two selected models, and the performance can be analyzed. The best performing model can be selected as the new local model 306 for the given cohort at the local site 300.” [0051] “Referring to FIG. 8, the process 800 can include an act 810 where a local site can select a local model from a pool of local models. At act 830, the local site can select the best model as the new model based on performance of the two selected models.”)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Li, Zhang, Rengers, and Zhou before them, to incorporate the local model selection as taught by Zhou. One would have been motivated to make such a combination in order to improve the accuracy of asset failure prediction models by sharing information among different sites without compromising privacy and security (Zhou [0013]).

Regarding Claim 14, the claim recites substantially similar limitations as corresponding claim 6 and is rejected for similar reasons using similar teachings and rationale.

Claim(s) 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Zhang as described above, and further in view of Lu et al. (IDS: “Blockchain and Federated Learning for Privacy-Preserved Data Sharing in Industrial IoT” (2019)).

Regarding Claim 8, Li in view of Zhang teaches the elements of claim 1 as outlined above. While Li in view of Zhang teaches committee members performing a consensus mechanism to validate the local models using a validation dataset and determining the qualified updates that will be packed onto the blockchain based on validation accuracy, Li in view of Zhang is silent as to whether the validation data used to evaluate the accuracy of the trained models includes noise. However, Lu, in combination with Li and Zhang, teaches:

wherein, in the performing of the model consensus on all the training results according to the consensus algorithm, a group including a client proposing a candidate model among the plurality of clients is configured, the test set including noise by each of the clients in the group is configured, and as accuracy of all candidate models is calculated based on the test set and shared with clients in the group, the model consensus is performed based on the accuracy. (Lu, [P. 6, Section III-D-1] “To protect data privacy during multiparty decentralized learning, we incorporate a differential privacy preserved mechanism into federated learning. ... Since the local model will be shared to other participants, to protect the privacy of Vec_i, we incorporate differential privacy in the learning phase to train a m̂_i from noised data. Then, P_i will send model m̂_i to other participants. Once m̂_i is received, P_{i+1} will train a new local data model m̂_{i+1} based on the received m̂_i and its local data, then broadcast m̂_{i+1} to other participants. The data models are trained iteratively among participants. ... Differential private local model training: The noise calibrated by sensitivity s is added to local data Vec_i. The local data model m̂_i is trained locally at P_i, by using a machine learning algorithm on the selected noisy data Vec_i. 3) Collaborative multiparty learning: The Laplace mechanism is applied on the local data model m_i to achieve differential privacy: m̂_i = m_i + Laplace(s/ε) (4), where s is the value of sensitivity, as shown in Eq. (5). Then, the noise-added model m̂_i is broadcasted as a transaction of the blockchain to other participants for federated learning.” [P. 7, Section III-D-2] “Since each committee node trains a local data model, the quality of the model should be verified and measured during the consensus process. We leverage prediction accuracy to quantify the performance of the trained local model. More specifically, in the classification during training, the accuracy is denoted by the fraction of correctly classified records. While in the task of regression, the accuracy is measured by mean absolute error (MAE) ... where f(x_i) is the prediction value of model m_i and y_i is the real value of the records. The lower the MAE of model m_i is, the higher the accuracy of m_i will be. ... During responding to a data sharing request, a committee node P_i transmits its trained model m_i to the next committee node. The transmissions are recorded as model transactions t_i^m, together with its MAE(m_i). ... A committee node P_j collects all model transactions and stores them locally as candidate blocks. ... Each verifying node calculates the MAE(m_i) for each model transaction and MAE(M). If the calculated MAE is within a certain range, an approval will be sent to the leader. If the block containing all transactions is approved by every committee node, the leader will send the block data signed with its signature to all nodes.”)

Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Li, Zhang, and Lu before them, to incorporate the privacy-preserving data sharing mechanism as taught by Lu. One would have been motivated to make such a combination in order to enable secure data sharing with high efficiency and utility in the consensus protocol (Lu [Section V]).

Regarding Claim 16, the claim recites substantially similar limitations as corresponding claim 8 and is rejected for similar reasons using similar teachings and rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

(Pub. No.: US 20220114475 A1) – “Rui Zhu” relates to “Methods and systems for decentralized federated learning.”
(Pub. No.: US 20220044162 A1) – “Qiong Zhang” relates to “Blockchain-based secure federated learning.”
NPL: Gilad, Yossi, et al. "Algorand: Scaling byzantine agreements for cryptocurrencies." (2017).
NPL: Kim, Woocheol, and Hyuk Lim. "FedCC: federated learning with consensus confirmation for Byzantine attack resistance (student abstract)." (2022).
NPL: Tao, Wenqi, and Qifang Yin. "Candidate models for federated learning with blockchain." (2022).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SADIK ALSHAHARI, whose telephone number is (703) 756-4749. The examiner can normally be reached Monday-Friday, 9 a.m.-6 p.m. ET.

Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.A.A./ Examiner, Art Unit 2121
/Li B. Zhen/ Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Jun 12, 2023
Application Filed
Jan 26, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596930: SENSOR COMPENSATION USING BACKPROPAGATION (2y 5m to grant; granted Apr 07, 2026)
Patent 12493786: Visual Analytics System to Assess, Understand, and Improve Deep Neural Networks (2y 5m to grant; granted Dec 09, 2025)
Patent 12462199: ADAPTIVE FILTER BASED LEARNING MODEL FOR TIME SERIES SENSOR SIGNAL CLASSIFICATION ON EDGE DEVICES (2y 5m to grant; granted Nov 04, 2025)
Patent 12437199: Activation Compression Method for Deep Learning Acceleration (2y 5m to grant; granted Oct 07, 2025)
Patent 12430552: Processing Data Batches in a Multi-Layer Network (2y 5m to grant; granted Sep 30, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 35%
With Interview: 82% (+47.1%)
Median Time to Grant: 4y 5m
PTA Risk: Low
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
