DETAILED ACTION
This nonfinal action is in response to the amendment and remarks filed 01/05/2026 for application 17/582,873.
Claims 1-3, 6, 8, 9, and 13 have been amended.
Claims 1-14 remain pending in the application. Claims 1 and 8 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action mailed 11/13/2025 has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed 01/05/2026 has been entered.
Response to Amendment
The amendment filed 01/05/2026 has been entered.
Applicant’s amendment to the claims with respect to resolving claim objections has been considered, and the objections set forth in the Office action mailed 11/13/2025 are consequently withdrawn.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 2, it recites the limitation “receiving first hash data of the federated learning parameter stored in the hardware secure architecture of the electronic device”. However, parent claim 1 does not previously recite the “federated learning parameter” as being stored in the hardware secure architecture. While claim 1 does previously recite “receiving federated learning secure data stored in a hardware secure architecture”, the “federated learning parameter” is not expressly recited as being included within the “federated learning secure data”, and is instead only previously recited as being included within “federated learning data”. It is therefore unclear whether the “federated learning parameter stored in the hardware secure architecture” recited in claim 2 refers to the same “federated learning parameter” recited in parent claim 1, or instead refers to an entirely separate “federated learning parameter” included within the “hardware secure architecture”. Consequently, one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
For purposes of examination and as best understood in light of the specification, the limitation “receiving first hash data of the federated learning parameter stored in the hardware secure architecture of the electronic device” is interpreted as reciting a separate parameter, i.e., “receiving first hash data of a second federated learning parameter stored in the hardware secure architecture”.
Regarding claim 9, it has the same deficiencies as those found in claim 2 above. Consequently, it is rejected for the same reasons and likewise interpreted as detailed above.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-10, and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Sheller et al., (Pub. No. US 20190042937 A1, “Methods and Apparatus for Federated Training of a Neural Network Using Trusted Edge Devices”, published 02/07/2019), hereinafter Sheller, in view of Nadeau et al., (Pub. No. US 20190268163 A1, “Secret Sharing Via Blockchain Distribution”, published 08/29/2019), hereinafter Nadeau, and Sly et al. (Pub. No. US 20210328801 A1, “Systems And Methods To Verify Identity Of An Authenticated User Using A Digital Health Passport”, effectively filed 04/21/2020), hereinafter Sly.
Nadeau was cited in the previous office action mailed 05/15/2025.
Regarding claim 1, Sheller teaches A method, performed by a server, of performing federated learning with an electronic device, ("Federated Learning enables a model representing a neural network to be trained using data across many edge systems without having to centralize the data used for such training. Edge devices perform local training, and provide training results to an aggregator device, which aggregates the training results among the multiple edge devices to update a centralized model, which can then be re-distributed to the edge devices for subsequent training and/or use" [Sheller ¶ 0015]; "In examples disclosed herein, the aggregator device 110 is implemented by a server" [Sheller ¶ 0021]; The example edge device(s) 130, 135, 137 of the illustrated example of FIG. 1 is implemented by a computing platform such as, for example an Internet of Things (IoT) device, a smartphone, a personal computer, etc. [Sheller ¶ 0023]) the method comprising:
transmitting a data request, to the electronic device that requests transmission of a federated learning parameter used to refine a core artificial intelligence model built in the server ("FIG. 2 is a block diagram of an example implementation of the example aggregator device 110 of FIG. 1. The example aggregator device 110 includes model update receiver 210, and model updater 230, central model data store 240, a model provider 250, and a training data instructor 260" [Sheller ¶ 0027]; "The example model provider 250 provides the current state of the machine learning model out to each edge device. In some examples, the model provider 250 provides additional instructions that accompany the model such as, for example, threshold values that are to be used by the edge device when training the model, processing queries against the model, and/or providing updates to the model to the aggregator device 110" [Sheller ¶ 0032]),
receiving, from the electronic device, federated learning data including the federated learning parameter; (“FIG. 3 is a block diagram of an example implementation of the example edge device 130 of FIG. 1. The example edge device 130 includes a model receiver 305, a local model data store 310, a neural network processor 315, a neural network trainer 320, a local data throttler 325, a model update provider 330, a local data accesser 335, a hash ledger 337, a query handler 340, an input scanner 345, a query ledger 350, trusted input hardware 360 and a local data provider 370” [Sheller ¶ 0034]; "The example model update provider 330 provides a model update to the example aggregator 110. In some examples, additional information is provided along with the model update such as, for example, an identity of the edge device 130, an indication of how much training data was used to prepare the model update, and/or other parameters identified as part of the model training process" [Sheller ¶ 0044]; "The example model update receiver 210 receives model updates from the edge devices 130, 135, 137. The example model update receiver 210 of the example aggregator device 110 aggregates the model updates provided by the edge devices 130, 135, 137" [Sheller ¶ 0028])
identifying whether a result of federated learning performed by the electronic device is trustable, based on the federated learning data; ("In examples disclosed herein, the edge devices may implement trusted execution environments 132 such that the model updates from those edge devices may be trusted. However, in some examples the edge devices 130, 135, 137 might not implement trusted execution environments and, updates from those edge devices might not be trusted. When a model update is not trusted, additional checks are implemented to ensure that the model update does not maliciously affect the central model stored in the central model data store 240" [Sheller ¶ 0028]) and
refining the core artificial intelligence model, based on a result of the identifying, (“The example model update receiver 210 of the example aggregator device 110 accesses the results provided by the edge devices 130, 137. (Block 440). In some examples, the model updates are aggregated as they arrive at the aggregator 110 (e.g., in a streaming average). In some examples, Byzantine Gradient Descent is used to exclude extreme model update results. In the illustrated example of FIG. 4, the example model update receiver 210 aggregates model updates from trusted edge devices. (Block 443). That is, if a model update is received from a trusted edge device (e.g., an edge device that implements a trusted execution environment), it is automatically included in the aggregation. The example model update receiver 210, applies Byzantine Gradient Descent to model updates that originate from non-trusted edge devices. (Block 445), Applying Byzantine Gradient Descent to model updates originating from non-trusted edge devices enables elimination of extreme model updates (which may potentially be malicious” [Sheller ¶ 0061])
wherein the receiving of the federated learning data comprises receiving federated learning secure data stored in a hardware secure architecture of the electronic device, ("The illustrated example of FIG. 1 includes an aggregator device 110, a network 120, and edge devices 130, 135, 137. Example approaches disclosed herein utilize a Trusted Execution Environment (TEE) implemented at either the edge device(s) 130, 135, 137 and/or the aggregator device 110. In the illustrated example of FIG. 1, the aggregator device 110 includes a TEE 112, and the example edge device 130 includes a TEE 132. In examples disclosed herein, a TEE is a secure area within a main processor of the device. However, any other approach to implementing a TEE may additionally or alternatively be used such as, for example, a dedicated secure processor." [Sheller ¶ 0018])
wherein the identifying of whether the result of the federated learning is trustable comprises identifying whether the result of the federated learning is trustable, based on the federated learning secure data (“Moreover, TEEs may be implemented only in some of the edge devices. When aggregating training results from the edge devices 130, 135, 137, the example aggregator device 110 may incorporate results differently depending on whether the results were provided by an edge device that implemented a TEE or not. For example, the example aggregator may apply Byzantine Gradient Descent (BGD) to training results provided by edge devices that do not implement a TEE to ensure that extreme edge training results are ignored" [Sheller ¶ 0020]).
However, Sheller does not expressly teach the federated learning secure data compris[ing] a first message authentication code generated by the electronic device by using a predetermined algorithm, or comparing the first message authentication code to a second message authentication code obtained by the server by using a predetermined algorithm, wherein the second message authentication code is generated by the server by applying the predetermined algorithm to the federated learning data associated with the federated learning parameter for integrity verification, and wherein the result of the identifying indicates whether the integrity of the federated learning parameter is verified based on a comparison between the first message authentication code and the second message authentication code.
In the same field of endeavor, Nadeau teaches a federated learning framework (“FIG. 2 illustrates federated learning. Here the learning model 20 may be improved based on usage reported by many different mobile devices 30. While the number of mobile devices may be hundreds, thousands, or even millions, FIG. 2 simply illustrates four (4) mobile devices 30a-d. That is, all the mobile devices 30a-d (again illustrated as smartphones 32a-d) execute the learning model 20. Each smartphone 32a-d randomly, periodically, or on command sends a local update 50a-d via a communications network 52 to a server 54. The local update 50a-d may merely summarize a local change 56a-d to the learning model 20. The local change 56a-d may be based on the raw electronic data 22a-d gathered by, or processed by, the learning model 20…Regardless, the server 54 may use the local updates 50a-d to improve the learning model 20. Indeed, the server 54 may aggregate the local updates 50a-d to generate a learning modification 60 to the learning model 20” [Nadeau ¶ 0021]) that receives federated learning secure data comprising a first message authentication code generated by an electronic device by using a predetermined algorithm (“The local update 50a-d may merely summarize a local change 56a-d to the learning model 20. The local change 56a-d may be based on the raw electronic data 22a-d gathered by, or processed by, the learning model 20…The local update 50a-d may be a file that includes or specifies an alphanumeric device identifier 58a-d that uniquely identifies the corresponding mobile device 30a-d” [Nadeau ¶ 0021]; “Exemplary embodiments may perform cryptographic comparisons to discern data differences. 
That is, the server 54 may retrieve the cryptographic hash value(s) 78 generated from hashing the original version 130 of the local update 50” [Nadeau ¶ 0045]; “The blockchain 70 may thus store original versions of any data described by the local update 50 and/or used to train or improve the learning model 20. The original versions of the data may be raw and unencrypted, encrypted, and/or hashed (using the hashing algorithm 76 above discussed). Indeed, the cryptographic hash values 78 may be used to validate the original versions of the data. The mobile device 30 may even store and execute trusted platform modules to sign the electronic data 22, thus proving that the mobile device 30, and only the mobile device 30, generated the electronic data 22” [Nadeau ¶ 0036]; “The noun identifier 90, as previously explained, uniquely sources the mobile device 30 (e.g., the device identifier 58), the learning model (e.g., the model identifier 98), and/or the current user (e.g., the user identifier 100) (as explained with reference to FIG. 7). Exemplary embodiments may thus cryptographically hash the noun identifier 90 (using the hashing algorithm 76) to cryptographically bind any change to the learning model 20. For example, exemplary embodiments may use a trusted platform module 150 to securely generate the hash values 78 and to limit or specify permitted usage. 
The local update 50, for example, may thus be digitally and cryptographically signed and added to the blockchain 70, thus later proving that the mobile device 30, and only the mobile device 30, generated the local update 50” [Nadeau ¶ 0049]; The server may receive hashes of the original version of the local update (i.e., federated learning secure data), wherein the local update includes a noun identifier, and device identifier (i.e., message authentication code) therein, that was cryptographically hashed via the trusted platform module of a mobile device (i.e., generated within hardware secure architecture of electronic device by using a hash function (i.e., predetermined algorithm))
comparing the first message authentication code to a second message authentication code obtained by the server by using the predetermined algorithm (“Here exemplary embodiments may cryptographically hash the noun identifier 90 as verification of originality. The noun identifier 90, as previously explained, uniquely sources the mobile device 30 (e.g., the device identifier 58), the learning model (e.g., the model identifier 98), and/or the current user (e.g., the user identifier 100) (as explained with reference to FIG. 7)” [Nadeau ¶ 0049]; “Exemplary embodiments may verify, or deny, originality. Exemplary embodiments may perform cryptographic comparisons to discern data differences. That is, the server 54 may retrieve the cryptographic hash value(s) 78 generated from hashing the original version 130 of the local update 50. The server 54 may also retrieve and hash the current version 132 of the local update 50 (using the same cryptographic hashing algorithm 76) to generate one or more verification hash values 136.” [Nadeau ¶ 0045]; The server utilizes the same cryptographic hashing algorithm (i.e., the predetermined algorithm) on the current version of the local update (i.e., federated learning data) to generate verification hash values, i.e., second message authentication code comprising hashed versions of identifiers used to verify originality)
wherein the second message authentication code is generated by the server by applying the predetermined algorithm to the federated learning data associated with the federated learning parameter for integrity verification, ([Nadeau ¶ 0045] as detailed above; The server utilizes the same cryptographic hashing algorithm (i.e., the predetermined algorithm) on the current version of the local update (i.e., federated learning data, including learning parameters) to generate verification hash values) and
wherein the result of the identifying indicates whether integrity of the federated learning parameter is verified based on a comparison between the first message authentication code and the second message authentication code (“If the verification hash values 136 match the cryptographic hash values 78 generated from the original version 130 of the local update 50, then the local update 50 has not changed since the date and time of creation 134. That is, the current version 132 of the local update 50 is the same as the original version 130, unaltered, and thus authentic 138. However, if the verification hash values 136 (generated from hashing the current version 132 of the local update 50) fail to match the cryptographic hash values 78 generated from the original version 130 of the local update 50, then the current version 132 has changed since the date and time of creation 134. Exemplary embodiments, in other words, reveal an alteration that may indicate the current version 132 is inauthentic 140. Exemplary embodiments may thus generate a flag or other alert 142 to initiate further investigation” [Nadeau ¶ 0045])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the federated learning secure data compris[ing] a first message authentication code generated by the electronic device by using a predetermined algorithm, or comparing the first message authentication code to a second message authentication code obtained by the server by using a predetermined algorithm, wherein the second message authentication code is generated by the server by applying the predetermined algorithm to the federated learning data associated with the federated learning parameter for integrity verification, and wherein the result of the identifying indicates whether the integrity of the federated learning parameter is verified based on a comparison between the first message authentication code and the second message authentication code as taught by Nadeau into Sheller because they are both directed towards federated learning frameworks. Incorporating the cryptographic hash comparison method taught by Nadeau would improve the system by providing an additional means of ensuring trustworthiness of edge devices and filtering out possible poisoning attacks when the source of a model update is unknown. Sheller already teaches means of filtering out possible poisoning attacks via applying Byzantine Gradient Descent to model updates, and also throttling the aggregation of updates (e.g., only allowing a non-trusted edge device to provide an update to the central model once every N iterations) [Sheller ¶ 0061], yet also acknowledges the existence of poisoning attacks that can circumvent Byzantine Gradient Descent [Sheller ¶ 0073]. 
As such, a person of ordinary skill in the art would recognize the value of implementing further means (i.e., additional checks [Sheller ¶ 0028]) of avoiding poisoning attacks and flagging potentially malicious updates/devices ("Exemplary embodiments may thus generate a flag or other alert 142 to initiate further investigation" [Nadeau ¶ 0045]), given the known negative impact that a significant number of malicious updates can have on model robustness ("The higher the constant, the greater the negative impact that the algorithm has on model convergence. If there are too many malicious updates, the aggregator cannot assure robustness" [Sheller ¶ 0016]).
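The cryptographic comparison described by Nadeau (hashing the original version of the local update, then re-hashing the received copy and comparing digests to detect alteration) can be sketched as follows. This sketch is an illustrative aid only; the function names, the SHA-256 choice, and the sample update contents are hypothetical assumptions rather than anything disclosed in the cited references.

```python
import hashlib
import json

def compute_digest(update: dict) -> str:
    # Serialize the local update deterministically, then hash it
    # (standing in for Nadeau's "hashing algorithm 76").
    payload = json.dumps(update, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def verify_update(received_update: dict, original_digest: str) -> bool:
    # Recompute the digest of the received (current) version and compare it
    # with the digest of the original version; a mismatch reveals alteration.
    return compute_digest(received_update) == original_digest

# Edge-device side: hash the original local update before transmission.
update = {"device_id": "edge-130", "weights_delta": [0.01, -0.02]}
stored_digest = compute_digest(update)

# Server side: an unaltered copy verifies; a tampered copy is flagged.
tampered = {"device_id": "edge-130", "weights_delta": [9.9, -0.02]}
assert verify_update(update, stored_digest)
assert not verify_update(tampered, stored_digest)
```

The mismatch case corresponds to Nadeau's "flag or other alert 142 to initiate further investigation" [Nadeau ¶ 0045].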
However, the combination does not expressly teach the [transmitted] data request comprising a secure key of the server, or generating messages by using a predetermined algorithm based on the secure key of the server.
In the same field of endeavor, Sly teaches means of enabling secure communication between a centralized server and client devices for machine learning applications (“During the registration process, the characteristic identity vector generator 133 can process a range of facial images and sound samples obtained from the worker to generate a characteristic identity vector, against which the worker can be authenticated. Trained machine learning models such as convolutional neural networks (CNNs) can be applied to a plurality of pre-processed biometric inputs such as face images and voice samples to generate a plurality of feature vectors for respective face images and voice samples” [Sly ¶ 0066]; “The biometric identifier of the authenticated user can be generated by feeding a plurality of non-deterministic biometric inputs to a trained machine learning model to produce a plurality of feature vectors” [Sly ¶ 0193]; “The worker's device can request a shared encryption key from the registration server to cryptographically encrypt the worker's identification data such as biometric identifier or other personal identification data such as a photograph of the worker. The system can use a shared key in addition to the public encryption of the registration server to provide additional security when sending data to the registration server 111” [Sly ¶ 0078]) wherein the data transmitted to the electronic device comprises a secure key of the server (“The registration server can call a key management service (KMS) 333 to generate the new shared key (337). The KMS can generate a new shared key (339) send the new shared key from the vault to the registration server (341). The registration server can save the shared key to the database 158 along with the UUID for use in decrypting data received from the worker's device (343). The registration server 111 sends the shared key to the worker's device” [Sly ¶ 0079]),
and the server and electronic device generate messages by using a predetermined algorithm based on the secure key of the server (“The worker's device then sends a message “register identity” 547 to registration server. The message can include the device ID, UUID, public key of the worker's device referred to as “client_pub” or “A” and a hash of the shared key referred to as “hash(shared_key). The registration server computes shared key and compares it with the hash of shared key received from the worker's device (549). The registration server sends a message 551 to the worker's device. The message includes the generator (g), the prime number (p), public computed key of server (server_pub or B) and a hash of the shared key generated by the registration server. The worker's device can send an acknowledgement (or ACK) message to registration server with hash of shared key and disconnect the session (553). At this point, both worker's device and registration server have established a shared key without exchanging the shared key with each other. This shared key can then be used to decrypt any encrypted communication between the registration server and the worker's device” [Sly ¶ 0093]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated data transmitted to the electronic device comprising a secure key of the server, and generating messages by using a predetermined algorithm based on the secure key of the server, as taught by Sly, into Sheller and Nadeau because they are all directed towards enabling secure communication between a centralized server and client devices for machine learning applications. Given that Sheller already discloses identification of trusted edge devices through implementation of trusted execution environments (TEE) (“…if a model update is received from a trusted edge device (e.g., an edge device that implements a trusted execution environment), it is automatically included in the aggregation” [Sheller ¶ 0061]), wherein TEEs may serve as root points from which to establish secure channels of communication (“Many TEEs provide roots of trust from which to establish secure channels. If the model queries are originating from known, trusted endpoints, then the system can know a priori whether these queries are possibly part of a reverse engineering attack” [Sheller ¶ 0078]), and Nadeau also discloses implementation of trusted platform modules of mobile devices as being known in the art [Nadeau ¶ 0049], a person of ordinary skill in the art would recognize the value of incorporating the teachings of Sly to further enable efficient building of said secure channels of communication between trusted edge devices and the main server (“The process of encryption and decryption by two separate keys makes the process slower as compared to use of one shared key. A commonly used asymmetric encryption technique is RSA. 
The technology disclosed can use both shared and public/private keys to provide a secure communication between worker's device, the registration server, and the verification device” [Sly ¶ 0077]), thereby preventing an attacker from intercepting the transmission of model updates.
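The key-establishment exchange quoted from Sly ¶ 0093 (a generator g, a prime p, exchanged public values, and confirmation via a hash of the shared key, with the shared key itself never transmitted) follows the general pattern of Diffie-Hellman key agreement. It can be sketched as follows; the toy parameters and variable names are illustrative assumptions, not values taken from Sly, and a real system would use large, vetted primes.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters (illustrative only; 2**32 - 5 is prime).
p = 2**32 - 5
g = 5

# Worker's device: private exponent a, public value A = g^a mod p.
a = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)

# Registration server: private exponent b, public value B = g^b mod p.
b = secrets.randbelow(p - 2) + 1
B = pow(g, b, p)

# Each side derives the same shared key without ever transmitting it.
device_shared = pow(B, a, p)
server_shared = pow(A, b, p)
assert device_shared == server_shared

# Only a hash of the shared key is exchanged to confirm agreement,
# echoing the "hash(shared_key)" messages described by Sly.
device_hash = hashlib.sha256(str(device_shared).encode()).hexdigest()
server_hash = hashlib.sha256(str(server_shared).encode()).hexdigest()
assert device_hash == server_hash
```

Once confirmed, such a shared key can be used to decrypt encrypted communication between the server and the device, as Sly describes.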
Regarding claim 2, the combination of Sheller, Nadeau, and Sly teaches the limitations of parent claim 1, and Nadeau further teaches wherein the receiving of the federated learning secure data comprises receiving first hash data of the federated learning parameter stored in the hardware secure architecture of the electronic device, ([Nadeau ¶ 0021, 0045, 0036, 0049] as detailed in claim 1 above; The server may receive hashes of the original version of the local update (i.e., federated learning secure data, including learning parameters), wherein the local update includes a noun identifier, and device identifier (i.e., message authentication code) therein, that was cryptographically hashed via the trusted platform module of a mobile device (i.e., generated within hardware secure architecture of electronic device by using a hash function)) and
the identifying of whether the result of the federated learning is trustable comprises:
obtaining second hash data from the federated learning parameter received from the electronic device; ([Nadeau ¶ 0049, 0045] as detailed in claim 1 above; The server utilizes the same cryptographic hashing algorithm (i.e., the predetermined algorithm) on the current version of the local update (i.e., federated learning data received from device, including learning parameters) to generate verification hash values, i.e., second hash data) and
identifying an integrity of the result of the federated learning by comparing the first hash data to the second hash data, (“If the verification hash values 136 match the cryptographic hash values 78 generated from the original version 130 of the local update 50, then the local update 50 has not changed since the date and time of creation 134. That is, the current version 132 of the local update 50 is the same as the original version 130, unaltered, and thus authentic 138. However, if the verification hash values 136 (generated from hashing the current version 132 of the local update 50) fail to match the cryptographic hash values 78 generated from the original version 130 of the local update 50, then the current version 132 has changed since the date and time of creation 134. Exemplary embodiments, in other words, reveal an alteration that may indicate the current version 132 is inauthentic 140. Exemplary embodiments may thus generate a flag or other alert 142 to initiate further investigation” [Nadeau ¶ 0045])
wherein the first hash data constitutes or is derived from at least a portion of the first message authentication code ([Nadeau ¶ 0021, 0049]; The hash of the local update (i.e., first hash data) can include the device identifier (i.e., authentication code)).
Regarding claim 3, the combination of Sheller, Nadeau, and Sly teaches the limitations of parent claim 1, and Sheller further teaches wherein the receiving of the federated learning secure data comprises receiving, by the server, the federated learning secure data including federated learning performance information about a result of performing training on an artificial intelligence model built in the electronic device ("The example model update provider 330 provides a model update to the example aggregator 110. In some examples, additional information is provided along with the model update such as, for example, an identity of the edge device 130, an indication of how much training data was used to prepare the model update, and/or other parameters identified as part of the model training process" [Sheller ¶ 0044]; In light of the instant specification, the examiner has interpreted “performance information” as including any kind of data/information related to the training of the models in edge devices (e.g., the training data and/or relative amount of training data used)), and
the identifying of whether the result of the federated learning is trustable comprises identifying whether the result of the federated learning is trustable, based on the federated learning performance information (“The example training data instructor 260 determines whether to allow edge devices to incorporate new training data in a given training round, and instructs the edge devices 130, 137 concerning the use of new training data for a given training round. In examples disclosed herein, the example training data instructor 260 allows new training data to be used every N rounds. However, any other approach to selecting when new training data will be allowed may additionally or alternatively be used. In examples disclosed herein, the determination of whether to allow new training data is made with respect to all edge devices. However, in some examples, the determination of whether to allow new training data may be made with respect to individual edge devices” [Sheller ¶ 0033]; “In some examples, the local data throttler 325 is instructed that new local data should be committed every N training rounds. In such an example, the local data throttler 325 determines whether N training rounds have elapsed since additional and/or new local model data was allowed to be used as part of the model training process. In some examples, the value for N is provided by the example aggregator device 110 when transmitting the model to the edge device 130. If the example local data throttler 325 determines that N rounds have not yet elapsed, new local data is not allowed to be incorporated in the training process. If the example update transmission throttler 325 determines that the at least N training rounds have elapsed since the new local data was last allowed to be incorporated in the model training process, the example local data throttler 325 enables the inclusion of new local data in the model training process” [Sheller ¶ 0043]).
Regarding claim 5, the combination of Sheller, Nadeau, and Sly teaches the limitations of parent claim 3, and Sheller further teaches wherein the federated learning performance information comprises an outlier detection value generated based on outlier detection being performed on training data used by the electronic device to train the artificial intelligence model built in the electronic device, ("While in the illustrated example of FIG. 5, hashes are used to determine whether to allow training based on locally received training data, other techniques for determining whether to permit training may additionally or alternatively be used. For example, the training data may be compared to previously provided training data to determine a degree of similarity to the prior training data” [Sheller ¶ 0077]; Identifying whether the training (and associated training result) is permissible involves determining a degree of similarity (i.e., outlier detection value) of training data to prior data. In light of the instant specification, the examiner has interpreted performing “outlier detection” as identifying data that deviates significantly (i.e., is non-similar, therefore having a low degree of similarity) from the existing (i.e., prior) data) and
the identifying of whether the result of the federated learning is trustable comprises identifying a reliability degree of the training data used by the electronic device by comparing the outlier detection value to a certain value (“If the newly provided training data is similar to previously submitted training data, such similarity suggests that the training data is legitimate (as training data is not expected to widely vary from one training iteration to the next). On the contrary, if the training data is not similar to previously submitted training data, such non-similarity suggests that the training data may have been tampered with in an effort to maliciously impact the model" [Sheller ¶ 0077]; The degree of similarity (i.e., outlier detection) is analyzed to determine whether the training data is legitimate or malicious (i.e., reliability degree). Making a binary decision of whether data may be legitimate or malicious, based on a measured degree of similarity, inherently requires comparing said degree of similarity to an existing threshold (i.e., certain value); e.g., a similarity degree above threshold N indicates legitimate data, and a similarity degree below threshold N indicates malicious data).
Regarding claim 6, the combination of Sheller, Nadeau, and Sly teaches the limitations of parent claim 3, and Sheller further teaches wherein the federated learning performance information comprises federated learning identification information including identification information related to the federated learning performed by the electronic device ("The example model update provider 330 provides a model update to the example aggregator 110. In some examples, additional information is provided along with the model update such as, for example, an identity of the edge device 130, an indication of how much training data was used to prepare the model update, and/or other parameters identified as part of the model training process" [Sheller ¶ 0044]). Nadeau further teaches the identifying of whether the result of the federated learning is trustable compris[ing] identifying whether the electronic device is trustable based on a comparison between first federated learning identification information received from the electronic device and second federated learning identification information pre-registered in the server ([Nadeau ¶ 0045] as detailed in claim 1 above; “Once the noun identifier 90 (e.g., the device identifier 58, the model identifier 98, and/or the user identifier 100, as above explained) is determined and/or retrieved, the noun identifier 90 may be hashed using the cryptographic hashing algorithm 76 (as above explained) to generate one or more cryptographic noun keys 170. The cryptographic noun key 170 may then incorporated into and/or distributed via the blockchain 70. Once any recipient receives the blockchain 70, the recipient may reverse lookup the noun key 170 to retrieve the corresponding noun identifier 90. For example, the recipient device 82 may send a key query to a database 172 of keys. FIG. 18 illustrates a key server 174 locally storing the database 172 of keys in local memory.
The database 172 of keys converts or translates the noun key 170 back into its corresponding noun identifier 90. FIG. 19 illustrates the database 172 of keys is illustrated as a table that electronically maps, relates, or associates different cryptographic noun keys 170 to different noun identifiers 90. The key server 174 identifies the corresponding noun identifier 90 and sends a key response. The key response, for example, identifies the device identifier 58, the model identifier 98, and/or the user identifier 100 as a source of the local update 50. Exemplary embodiments may thus identify the mobile device 30, the learning model 20, and the user associated with the local update 50” [Nadeau ¶ 0052]).
Regarding claim 7, the combination of Sheller, Nadeau, and Sly teaches the limitations of parent claim 1, and Sheller further teaches wherein the refining of the core artificial intelligence model comprises performing a protecting operation on the core artificial intelligence model, based on the result of the federated learning identified to be untrustable (“the example aggregator device 110 may incorporate results differently depending on whether the results were provided by an edge device that implemented a TEE or not. For example, the example aggregator may apply Byzantine Gradient Descent (BGD) to training results provided by edge devices that do not implement a TEE to ensure that extreme edge training results are ignored" [Sheller ¶ 0020]).
Regarding claims 8-10 and 12-14, they are machine/apparatus claims that correspond to the method of claims 1-3 and 5-7, which are already taught by the combination of Sheller, Nadeau, and Sly as detailed above. Sheller further teaches A server configured to perform federated learning with an electronic device, the server comprising: a communication interface comprising communication circuitry; memory storing one or more instructions; and at least one processor, comprising processing circuitry configured to execute the one or more instructions to: control the communication interface to transmit data to the electronic device and receive data from the electronic device ("FIG. 7 is a block diagram of an example processor platform 700 structured to execute the instructions of FIGS. 4 and/or 4A to implement the example aggregator device 110 of FIGS. 1 and/or 2...The processor platform 700 of the illustrated example includes a processor 712...The processor 712 of the illustrated example includes a local memory 713 (e.g., a cache)...The processor platform 700 of the illustrated example also includes an interface circuit 720...The interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726" [Sheller ¶ 0086-0092]). Consequently, claims 8-10 and 12-14 are rejected for the same reasons as claims 1-3 and 5-7.
Claims 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Sheller, Nadeau, and Sly, as applied to claims 3 and 10 above, further in view of Imteaj et al. (“FedAR: Activity and Resource-Aware Federated Learning Model for Distributed Mobile Robots”, published at an IEEE conference, December 2020), hereinafter Imteaj.
Regarding claim 4, the combination of Sheller, Nadeau, and Sly teaches the limitations of parent claim 3, and Sheller further teaches wherein the federated learning performance information comprises information of training time about a time taken by the electronic device to perform the training on the artificial intelligence model built in the electronic device (“FIG. 3 is a block diagram of an example implementation of the example edge device 130 of FIG. 1. The example edge device 130 includes…a model update provider 330,…a query handler 340,…a query ledger 350” [Sheller ¶ 0034]; "The example model update provider 330 provides a model update to the example aggregator 110. In some examples, additional information is provided along with the model update such as, for example, an identity of the edge device 130, an indication of how much training data was used to prepare the model update, and/or other parameters identified as part of the model training process" [Sheller ¶ 0044]; "In examples disclosed herein, the query handler 340 compares a timestamp representing a time at which the query was received against timestamp stored in the query ledger 350. In examples disclosed herein, the example query handler 340 determines that enough time is elapsed since a prior query when the smallest difference between the timestamp of the present query and any prior query stored in the example query ledger 350 is greater than a threshold amount of time. In examples disclosed herein the threshold amount of time is one query per second. However, any other threshold may additionally or alternatively be used" [Sheller ¶ 0080]).
However, the combination does not expressly teach [wherein] the identifying of whether the result of the federated learning is trustable comprises identifying whether the electronic device has trained the artificial intelligence model built in the electronic device, by performing outlier detection on the information of training time.
In the same field of endeavor, Imteaj teaches a federated learning framework ("This paper proposes an FL model by monitoring client activities and leveraging available local computing resources, particularly for resource-constrained IoT devices (e.g., mobile robots), to accelerate the learning process" [Imteaj Abstract]; "To tackle the above-mentioned issues, an alternative distributed ML paradigm named Federated Learning (FL) is proposed in [4] that performs on-device training based on client’s local data, pushes client’s training parameters at the edge, and learns from the global model...Further, there is a possibility of selecting a vulnerable FL client, specifically in an FL-based IoT (FL-IoT) environment, where the devices are more prone to susceptibility. Therefore, we need to monitor the clients’ available resources, behaviors, and contributions towards training a learning model" [Imteaj page 1 Introduction]) [wherein] the identifying of whether the result of the federated learning is trustable comprises identifying whether the electronic device has trained the artificial intelligence model built in the electronic device, by performing outlier detection on the information of training time ("We consider an FL client to be untrustworthy if the client infuses incorrect models or repeatedly gives slow responses" [Imteaj Abstract]; "If a local client’s model update performance lower than a specified threshold or a client repeatedly sent similar gradient updates, then the task publisher rejects the client update and does not update the global model. With the resource checking, unreliable client handle strategy, and malicious attacker detection approach, weak clients, and unreliable model updates are not considered during the learning process. 
The task publisher only considers the reliable local model update and performs a federated averaging strategy [24] to generate an updated global model" [Imteaj page 6 Evaluate Local Model Quality]; "Every participant client in a training round must submit their model within a given time because it is not feasible for the server to wait for a client for an infinite amount of time. The task publisher can set the threshold time for a task...when an FL client fails to accomplish its task on time, we set a penalty on that client's trust score (CPenalty with a trust score of -2). We check the past performance of that client, and if the client repeatedly failed to respond on time (20%-50% of its participation), then we add a CBlame trust value to that client's trust score (where CBlame = -8). In case the client’s straggling effect is observed above 50% of its overall participation, or if the client sends a model that has high deviation compare to the other client’s models, we add a CBan trust value to that client’s trust score (where CBlame = -16)" [Imteaj page 4 Calculate Trust of FL-client])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated [wherein] the identifying of whether the result of the federated learning is trustable comprises identifying whether the electronic device has trained the artificial intelligence model built in the electronic device, by performing outlier detection on the information of training time as taught by Imteaj into the combination because both Sheller and Imteaj are directed towards federated learning frameworks. Incorporating the teachings of Imteaj would enable the system to evaluate client/edge devices not only on reliability, but also on efficiency, which would be beneficial for mitigating the effect of resource constraints and minimizing model convergence time ("In this paper, we assume that we have a resource-constrained FL environment, where a client may have a straggler effect, and any interested client may provide inappropriate model information during the training process. The main reason of straggler effect is the resource shortage issue that leads us to think about a strategy to avoid an unreliable or inefficient client during the learning process. Moreover, the existence of unreliable or inconsistent clients can prolong the convergence time and may create a negative impact on overall model accuracy" [Imteaj page 3 System Description]).
Regarding claim 11, it is a machine/apparatus claim that corresponds to the method of claim 4, which is already taught by the combination of Sheller, Nadeau, Sly, and Imteaj as detailed above. Consequently, it is rejected for the same reasons.
Response to Arguments
The remarks filed 01/05/2026 have been fully considered.
Applicant’s remarks traversing the obviousness rejections under 35 U.S.C. 103 set forth in the office action mailed 11/13/2025, in view of claims 1-14 as amended, have been considered, but are moot because the new grounds of rejection set forth above do not rely on the reference(s) applied in the prior rejection of record for the subject matter specifically challenged in applicant's arguments.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY M BALAKRISHNAN whose telephone number is (571) 272-0455. The examiner can normally be reached 10am-5pm EST Mon-Thurs.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JENNIFER WELCH can be reached on (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/V.M.B./
Examiner, Art Unit 2143
/JENNIFER N WELCH/Supervisory Patent Examiner, Art Unit 2143