Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. This action is in response to the amendment filed February 20, 2026.
3. Claims 1-20 have been examined and are pending with this action.
Response to Arguments
4. Applicant’s arguments, filed February 20, 2026, with respect to claim 1, previously rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Williams et al. (US 2020/0204341 A1), hereinafter Williams, have been fully considered but are not persuasive.
Applicant appears to assert that Williams fails to teach that both the user data and the ML model are encrypted using a fully homomorphic encryption technique. The examiner respectfully disagrees; Williams explicitly teaches these limitations.
For instance, Williams teaches in paragraph [0083], with reference to FIG. 6b, “FIG. 6b shows a neuron 600′ that can be in a secure neural network analytic. Security is provided by encrypting the neural network weights Wn 620 using a fully homomorphic encryption scheme, such as BFV or CKKS thereby generating E(Wn) 650. The homomorphic scheme used must support addition and multiplication operations using encrypted values. The encrypted weights Wn 650 are then multiplied with the unencrypted real values extracted from the instance Xn 610, producing encrypted values that are summed and fed into the activation function(s) 630 of the first layer of artificial neurons. If the activation function 630 is a polynomial function, it can be computed directly on the encrypted values; otherwise, it is replaced with a polynomial approximation function chosen in advance (see FIG. 7). The encrypted values output by the activation functions then move through the rest of the neural network in this way until they reach the designated output signals, producing an encrypted prediction, classification or other result 640′.”, emphasis added. Such teachings explicitly disclose that the ML model is encrypted using a fully homomorphic encryption (FHE) scheme, since encrypting the weights Wn means that the model parameters are encrypted under FHE.
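For illustration only (this sketch is not part of the Williams disclosure or of the claims), the operation described in paragraph [0083] — encrypted weights multiplied with unencrypted input values and then summed, yielding an encrypted result — can be demonstrated with a toy Paillier-style additively homomorphic scheme. Paillier supports exactly the two operations this step needs (ciphertext-plaintext multiplication and ciphertext addition); the BFV and CKKS schemes Williams names additionally support ciphertext-ciphertext multiplication. The primes, weights, and inputs below are arbitrary illustrative values:

```python
import math, random

# Toy Paillier keypair over small primes (illustration only; real
# deployments use 2048-bit moduli, and Williams names FHE schemes
# such as BFV/CKKS that also allow ciphertext-ciphertext products).
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)  # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return (((u - 1) // n) * mu) % n

# Encrypted weights, plaintext inputs: E(w)^x decrypts to w*x, and
# multiplying ciphertexts adds plaintexts, giving E(sum(w_i * x_i)).
weights = [2, 3, 5]   # "model parameters", encrypted below
inputs = [7, 1, 4]    # unencrypted instance values, as in [0083]
acc = 1
for w, x in zip(weights, inputs):
    acc = (acc * pow(encrypt(w), x, n2)) % n2
print(decrypt(acc))  # weighted sum recovered without decrypting any weight
```

The server holding `acc` never sees the weights in the clear, which mirrors the obfuscation of the ML analytic described in Williams.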
Williams teaches in paragraph [0008], “The method may include receiving, from a client, by at least one server from a client, at least one machine learning data structure. The at least one machine learning data structure can be encrypted using a homomorphic encryption scheme.”, and also teaches in paragraph [0077], “This module encrypts the ML analytic data structure using a homomorphic encryption scheme. In one embodiment, a fully homomorphic encryption scheme is used including but not limited to BFV (Brakerski/Fan-Vercauteren) and CKKS (Cheon-Kim-Kim-Song). Details of the homomorphic encryption of a trained neural network and decision tree data structures are described in more detail below”, emphasis added.
Williams explicitly teaches, or at the very least suggests, throughout the reference that both the input data and the machine learning model can be encrypted with FHE. For at least the reasons above and the rejections set forth below, claims 1-20 remain rejected and pending.
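As a further illustration only (again, not part of the Williams disclosure), paragraph [0083] notes that a non-polynomial activation function is replaced with a polynomial approximation so it can be computed using only the additions and multiplications an FHE scheme supports. The sketch below, using an assumed degree-3 sigmoid approximation commonly cited in the homomorphic-inference literature, shows on plaintext values that such an approximation stays close to the true activation over a modest input range:

```python
import math

# Degree-3 polynomial stand-in for the sigmoid activation
# (assumed coefficients: sigma(x) ~ 0.5 + 0.197*x - 0.004*x**3).
# Only additions and multiplications appear, so the same arithmetic
# could be evaluated on FHE ciphertexts as described in [0083].
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def poly_sigmoid(x):
    return 0.5 + 0.197 * x - 0.004 * x ** 3

# Worst-case approximation error over [-3, 3] in steps of 0.1.
max_err = max(abs(sigmoid(k / 10) - poly_sigmoid(k / 10))
              for k in range(-30, 31))
print(round(max_err, 3))
```

On this range the approximation error stays below 0.05, which is the kind of accuracy trade-off the polynomial replacement in Williams accepts in exchange for FHE-compatibility.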
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
5. Claims 1-9 and 12-20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Williams et al. (US 2020/0204341 A1).
INDEPENDENT:
As per claim 1, Williams teaches a system for classifying encrypted data using an encrypted machine learning (ML) model (see Williams, [0003]: “Specifically, the analytics include trained machine learning models, sent in a homomorphic encrypted scheme, and executed in an unsecure environment. Thereby, the encrypted analytic can be sent to an untrusted environment, be evaluated against data under the untrusted party's control, and generate an encrypted prediction, classification or other result which can be transmitted back to a trusted environment.”), the system comprising:
a memory storing a set of computer-readable instructions including an encrypted ML model (see Williams, [0013]: “The system may include at least one processor and a memory storing processor-executable codes, wherein the at least one processor can be configured to implement the operations of the above-mentioned method for performing secure analytics using homomorphic encryption”); and
a processor interfacing with the memory, and configured to execute the set of computer- readable instructions (see Williams, [0013]: “The system may include at least one processor and a memory storing processor-executable codes, wherein the at least one processor can be configured to implement the operations of the above-mentioned method for performing secure analytics using homomorphic encryption”) to cause the system to:
receive encrypted data from a user, wherein the encrypted data is encrypted in accordance with a first fully homomorphic encryption technique (see Williams, Abstract: “The method further includes generating, using the encryption scheme, at least one homomorphically encrypted data structure, and sending the encrypted data structure to at least one server.”; [0003]: “Specifically, the analytics include trained machine learning models, sent in a homomorphic encrypted scheme, and executed in an unsecure environment.”; [0009]: “In some embodiments, the homomorphic encryption scheme includes a fully homomorphic encryption scheme. The fully homomorphic encryption scheme may include at least one of a Brakerski/Fan-Vercauteren and a Cheon-Kim-Kim-Song cryptosystem.”; [0071]: “The server(s) 520 receives homomorphically encrypted data structures 516 associated with a trained ML analytic, and executed in the homomorphically encrypted scheme. Thus, information about the ML analytic is obfuscated from parties in the untrusted environment”; and [0078]: “The HED module 514 homomorphically encrypts the ML analytic data structure 516 which is transmitted to the server(s) 520.”),
analyze, by executing the encrypted ML model that is encrypted in accordance with a second fully homomorphic encryption technique, the encrypted data to output an encrypted classification without decrypting the encrypted data (see Williams, [0080]: “To protect previously mentioned aspects a ML analytic, the server(s) 520 can be configured to perform the ML analytics using the ML homomorphically encrypted data structures in a homomorphic scheme on the instances 530 and thereby, obtain encrypted result of the ML analytics.”; and [0083]: “FIG. 6b shows a neuron 600′ that can be in a secure neural network analytic. Security is provided by encrypting the neural network weights Wn 620 using a fully homomorphic encryption scheme, such as BFV or CKKS thereby generating E(Wn) 650. The homomorphic scheme used must support addition and multiplication operations using encrypted values. The encrypted weights Wn 650 are then multiplied with the unencrypted real values extracted from the instance Xn 610, producing encrypted values that are summed and fed into the activation function(s) 630 of the first layer of artificial neurons. If the activation function 630 is a polynomial function, it can be computed directly on the encrypted values; otherwise, it is replaced with a polynomial approximation function chosen in advance (see FIG. 7). The encrypted values output by the activation functions then move through the rest of the neural network in this way until they reach the designated output signals, producing an encrypted prediction, classification or other result 640′.”), and
transmit the encrypted classification to a user computing device for decryption (see Williams, [0003]: “Thereby, the encrypted analytic can be sent to an untrusted environment, be evaluated against data under the untrusted party's control, and generate an encrypted prediction, classification or other result which can be transmitted back to a trusted environment.”; [0008]: “The method may further allow sending, by the at least one server, the at least one encrypted result to the client, wherein the client is configured to decrypt the at least one encrypted result using the homomorphic encryption scheme.”; and [0045]: “the server(s) 110 can be further configured to perform the encrypted analytics on the data source using the same homographic encryption scheme and the public key received from the client 105 and, thereby, obtain encrypted results of the analytics. The encrypted results can be sent to the client(s) 105. The client(s) 105 can decrypt the encrypted results using the private key. Because the private key is always kept on the client(s) 105, neither encrypted analytic nor encrypted results of the analytics can be decrypted on the server 110 or when intercepted while in transition between the client(s) 105 and the server(s) 110”).
As per claim 12, Williams teaches a computer-implemented method for classifying encrypted data using an encrypted machine learning (ML) model, the method comprising:
receiving, at one or more processors, encrypted data from a user, wherein the encrypted data is encrypted in accordance with a first fully homomorphic encryption technique;
analyzing, by the one or more processors executing an encrypted ML model that is encrypted in accordance with a second fully homomorphic encryption technique, the encrypted data to output an encrypted classification without decrypting the encrypted data; and
transmitting, by the one or more processors, the encrypted classification to a user computing device for decryption (see Claim 1 rejection above).
As per claim 20, Williams teaches a non-transitory computer readable medium comprising instructions for classifying encrypted data using an encrypted machine learning (ML) model that, when executed, may cause one or more processors (see Williams, pg.8, Claim 20: “A non-transitory computer-readable storage medium having embodied thereon instructions, which when executed by at least one processor, perform steps of a method”) to:
receive encrypted data from a user, wherein the encrypted data is encrypted in accordance with a first fully homomorphic encryption technique;
analyze, by executing an encrypted ML model that is encrypted in accordance with a second fully homomorphic encryption technique, the encrypted data to output an encrypted classification without decrypting the encrypted data; and
transmit the encrypted classification to a user computing device for decryption (see Claim 1 rejection above).
DEPENDENT:
As per claims 2 and 13, which respectively depend on claims 1 and 12, Williams further teaches wherein the set of computer-readable instructions, when executed, further cause the system to:
train a ML model with a set of training encrypted data as inputs to output a set of training encrypted classifications of the set of training encrypted data (see Williams, [0003]: “Advantageously, a homomorphic encrypted analytic can execute on a server in an unsecure environment and there by obfuscate information about the analytic that could be derived by examination of the analytic. This information could include the information about computation being performed, intellectual property, proprietary information, sensitive information, or protected classes of information. Specifically, the analytics include trained machine learning models, sent in a homomorphic encrypted scheme, and executed in an unsecure environment.”); and
encrypt the ML model using the second fully homomorphic encryption technique to create the encrypted ML model (see Williams, [0003]: “Specifically, the analytics include trained machine learning models, sent in a homomorphic encrypted scheme, and executed in an unsecure environment.”; [0008]: “The method may include receiving, from a client, by at least one server from a client, at least one machine learning data structure. The at least one machine learning data structure can be encrypted using a homomorphic encryption scheme.”).
As per claims 3 and 14, which respectively depend on claims 2 and 13, Williams further teaches wherein encrypting the ML model further comprises: encrypt one or more parameters of the ML model using the second fully homomorphic encryption technique (see Williams, [0037]: “The disclosed systems and methods include techniques for using homomorphic encryption to encrypt parts of an already-trained ML (machine learning) model.”).
As per claims 4 and 15, which respectively depend on claims 1 and 12, Williams further teaches wherein the encrypted ML model comprises: (i) a Naive Bayes model, (ii) a Decision Tree, or (iii) a Random Forest model (see Williams, Abstract: “The machine learning model includes neural networks and decision trees.”).
As per claims 5 and 16, which respectively depend on claims 1 and 12, Williams further teaches wherein the first fully homomorphic encryption technique or the second fully homomorphic encryption technique further includes a private key encryption technique (see Williams, [0031]: “The encryption scheme may include a public key for encryption and a private key for decryption”).
As per claims 6 and 17, which respectively depend on claims 1 and 12, Williams further teaches wherein the first fully homomorphic encryption technique is different from the second fully homomorphic encryption technique (see Williams, [0009]: “The fully homomorphic encryption scheme may include at least one of a Brakerski/Fan-Vercauteren and a Cheon-Kim-Kim-Song cryptosystem.”; and [0077]: “a fully homomorphic encryption scheme is used including but not limited to BFV (Brakerski/Fan-Vercauteren) and CKKS (Cheon-Kim-Kim-Song)”).
As per claims 7 and 18, which respectively depend on claims 1 and 12, Williams further teaches wherein the encrypted data from the user comprises: (i) a set of encrypted values, (ii) a set of additively homomorphic monotone functions, (iii) a set of indexes, (iv) a q1-bit integer, or (v) an n-tuple q1-bit integer (see Williams, [0071]: “The server(s) 520 receives homomorphically encrypted data structures 516 associated with a trained ML analytic, and executed in the homomorphically encrypted scheme. Thus, information about the ML analytic is obfuscated from parties in the untrusted environment”; and [0078]: “The HED module 514 homomorphically encrypts the ML analytic data structure 516 which is transmitted to the server(s) 520.”).
As per claims 8 and 19, which respectively depend on claims 1 and 12, Williams further teaches wherein the set of computer-readable instructions, when executed, further cause the system to: receive a second set of encrypted data from a second user that is encrypted in accordance with the second fully homomorphic encryption technique, and wherein the second set of encrypted data comprises: (i) a feature index at a node, (ii) a database, or (iii) a private encryption key for the database (see Williams, [0044]: “The analytics can be encrypted with a public (encryption) key of the homomorphic encryption scheme. The encrypted analytics and the public key can be sent to the server 110. The encrypted analytics can be only decrypted with a private (decryption) key of the homomorphic encryption scheme. The decryption key can be kept on the client(s) 105 and never provided to the server(s) 110.”).
As per claim 9, which depends on claim 1, Williams further teaches wherein the set of computer-readable instructions, when executed, further cause the system to: produce an encrypted update to a ML model with a set of training encrypted data as inputs from one or more users; transmit the encrypted update to each user of the one or more users for decryption; and update the ML model using the encrypted update (see Williams, [0032]: “Optionally, the method can comprise a step of transmitting the encryption key from the client, especially the first client, to the central aggregator and/or in opposite direction. In particular, this step of transmitting can be executed in an encrypted way, e.g., by using asymmetric encryption”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Williams et al. (US 2020/0204341 A1) in view of Prasad et al. (US 2024/0015003 A1).
As per claim 10, which depends on claim 1, although Williams further teaches wherein the set of computer-readable instructions, when executed, further cause the system to: receive, from a user, a training dataset encrypted using an independently generated key pair, in accordance with a multi-key, multi-hop fully homomorphic encryption technique (see Williams, [0031]: “The encryption scheme may include a public key for encryption and a private key for decryption”); train, with the training datasets, a ML model using a ML training technique (see Williams, [0003]: “Specifically, the analytics include trained machine learning models, sent in a homomorphic encrypted scheme, and executed in an unsecure environment. Thereby, the encrypted analytic can be sent to an untrusted environment, be evaluated against data under the untrusted party's control, and generate an encrypted prediction, classification or other result which can be transmitted back to a trusted environment. The decrypted result will be the same as if the unencrypted machine analytic operated on the data.”); receive encrypted test datasets from a user (see Claim 1 rejection above); generate, by executing the ML model, encrypted outputs for the encrypted test dataset; and cause user computing devices to participate in on-the-fly computation to decrypt one or more respective encrypted outputs of the encrypted outputs by transmitting the encrypted outputs to the user (see Claim 1 rejection above),
Williams does not explicitly teach a plurality of users; receiving encrypted test data from a plurality of users; outputs for the plurality of test data; or participating in multiparty computation by transmitting the encrypted output to each respective user of the one or more users.
Prasad teaches a plurality of users; receiving encrypted test data from a plurality of users; outputs for the plurality of test data; and participating in multiparty computation by transmitting the encrypted output to each respective user of the one or more users (see Prasad, [0020]: “Federated learning is a technique to develop a robust quality shared global model with a central aggregator (e.g. central server) from isolated data among many different clients. In a healthcare application scenario, assume there are K clients (nodes) where each client k holds its respective data Dk with nk total number of samples.”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of Williams in view of Prasad so that the system supports a plurality of users, receives encrypted test data from the plurality of users, produces outputs for the plurality of test data, and participates in multiparty computation by transmitting the encrypted output to each respective user of the one or more users. One would be motivated to do so because Williams teaches in paragraph [0042], “In some embodiments, the server(s) 110 may be implemented as cloud-based computing resource shared by multiple users. The cloud-based computing resource(s) can include hardware and software available at a remote location and accessible over a network (for example, the Internet). The cloud-based computing resource(s) can be dynamically re-allocated based on demand. The cloud-based computing resources may include one or more server farms/clusters including a collection of computer servers which can be co-located with network switches and/or routers.”, and further teaches in paragraph [0068], “These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users).”.
As per claim 11, which depends on claim 10, Williams further teaches wherein the set of computer-readable instructions, when executed, further cause the system to: receive, from a new user, a new training dataset encrypted using a new independently generated key pair, in accordance with the multi-key, multi-hop fully homomorphic encryption technique; and update the ML model with the new training dataset using the ML training technique without re-encrypting the training datasets (see Williams, [0042]: “In some embodiments, the server(s) 110 may be implemented as cloud-based computing resource shared by multiple users. The cloud-based computing resource(s) can include hardware and software available at a remote location and accessible over a network (for example, the Internet). The cloud-based computing resource(s) can be dynamically re-allocated based on demand. The cloud-based computing resources may include one or more server farms/clusters including a collection of computer servers which can be co-located with network switches and/or routers.”).
Conclusion
7. For the reasons above, claims 1-20 have been rejected and remain pending.
8. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL Y WON whose telephone number is (571)272-3993. The examiner can normally be reached on Wk.1: M-F: 8-5 PST & Wk.2: M-Th: 8-7 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nicholas R Taylor can be reached on 571-272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Michael Won/Primary Examiner, Art Unit 2443