Prosecution Insights
Last updated: April 19, 2026
Application No. 18/103,007

NEURAL NETWORK INFERENCE UNDER HOMOMORPHIC ENCRYPTION

Non-Final OA (§103)
Filed: Jan 30, 2023
Examiner: MAIDO, MAGGIE T
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 4y 3m
Grant Probability with Interview: 85%

Examiner Intelligence

Career Allow Rate: 64% (23 granted of 36 resolved cases; +8.9% vs TC avg)
Interview Lift: +20.7% for resolved cases with an interview
Typical Timeline: 4y 3m average prosecution; 51 applications currently pending
Career History: 87 total applications across all art units

Statute-Specific Performance

§101: 25.6% (-14.4% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§103: 56.1% (+16.1% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)
TC averages are estimates; based on career data from 36 resolved cases.

Office Action

§103
DETAILED ACTION

This action is responsive to claims filed on 30 January 2023. Claims 1-20 are pending for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C.
102(a)(2) prior art against the later invention.

Claims 1-6, 8-10, 13-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gharibi et al. (U.S. Pre-Grant Publication No. 20230306254, hereinafter 'Gharibi'), in view of Vaikuntanathan et al. (U.S. Pre-Grant Publication No. 20200036512, hereinafter 'Vaikuntanathan') and Xiong et al. (U.S. Patent No. 12001577, hereinafter 'Xiong').

Regarding claim 1 and analogous claims 10 and 20, Gharibi teaches: A computer-implemented method comprising: partitioning, into a client-side portion and a server-side portion, a trained neural network, the client-side portion comprising a first set of layers of the trained neural network, the server-side portion comprising a second set of layers of the trained neural network ([0041] FIG. 2 illustrates a split learning centralized approach. A trained neural network model (neural network) 204 is [partitioning, into a client-side portion and a server-side portion] split into two parts: [the client-side portion comprising a first set of layers of the trained neural network] one part (206A, 208A, 210A) resides on the respective client side 206, 208, 210 and includes the input layer to the model and optionally other layers up to a cut layer, and [the server-side portion comprising a second set of layers of the trained neural network] the other part (B) resides on the server side 202 and often includes the output layer. Split layer (S) refers to the layer (the cut layer) where A and B are split. In FIG. 2, SA represents a split layer or data sent from A to B and SB represents a split layer sent from B to A.)
, the trained neural network trained using a first set of training data ([0079] Next is discussed a new approach to training which uses different types of training data together to train a neural network, using the blind learning approaches disclosed herein.); and

Gharibi fails to teach computing, from a homomorphically encrypted intermediate result input to the server-side portion, a homomorphically encrypted output of the trained neural network, the homomorphically encrypted intermediate result comprising a homomorphically encrypted output computed by the client-side portion.

Vaikuntanathan teaches computing, from a homomorphically encrypted intermediate result input to the server-side portion, a homomorphically encrypted output of the trained neural network ([0047] 5. [input to the server-side portion] At the computation server 120 end, in a separate or integral trusted hardware device 150, the computationally light (e.g., relatively smaller number or complexity of) non-linear steps are performed as follows: (a) the computation server sends the encrypted intermediate computation state (encrypted under the public key PK) and the encryption of SK (encrypted under the public key PKhw) to the trusted hardware; (b) the trusted hardware decrypts the latter using its stored secret key SKhw to retrieve SK; (c) using the secret key SK, the trusted hardware decrypts [from a homomorphically encrypted intermediate result input to the server-side portion] the encrypted intermediate computation state and [computing a homomorphically encrypted output of the trained neural network] performs the light non-linear operations, e.g., in parallel, or at an overlapping time, to the HE device performing the linear operations (step 4); (d) the trusted hardware re-encrypts the result of the non-linear operations using PK; and (e) the trusted hardware sends the encryption result of the non-linear operations to the computation server.)
. Gharibi and Vaikuntanathan are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Gharibi, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Vaikuntanathan to Gharibi before the effective filing date of the claimed invention in order to encrypt secret data to safely share with one or more external or third parties, which can then remotely operate on the data, without exposing the underlying secret data to an untrusted party (cf. Vaikuntanathan, [0002] Embodiments of the invention are directed to data privacy, security, and encryption of secret data. In particular, embodiments of the invention are directed to secure multi-party collaborations, in which one or more parties encrypt their secret data to safely share with one or more external or third parties, which can then remotely operate on the data, without exposing the underlining secret data to an untrusted party.).

Xiong teaches the homomorphically encrypted intermediate result comprising a homomorphically encrypted output computed by the client-side portion ([Col. 14, Lines 24-43] In an embodiment, the process 700 comprises obtaining 702 input data to be processed using a machine learning model, the obtained input data being in plaintext form, such as described above. The input data can be, for example, inputs 208 described above in connection with FIG. 2. With the obtained input data, the process 700 can comprise using 704 a first portion of the machine learning model to calculate plaintext intermediate data, where the first portion of the machine learning model is in plaintext form and a second portion of the machine learning model is in ciphertext form according to a homomorphic encryption scheme, such as described above. The plaintext intermediate data can be, for example, outputs 218 described above in [computed by the client-side portion] connection with FIG. 2.
[the homomorphically encrypted intermediate result] With the plaintext intermediate data obtained, the process 700 can comprise using 706 the homomorphic encryption scheme to encrypt the plaintext intermediate data to obtain ciphertext intermediate data. The ciphertext intermediate data can be, for example, [comprising a homomorphically encrypted output] outputs of the homomorphic cipher function 220 described above in connection with FIG. 2.).

Gharibi, Vaikuntanathan, and Xiong are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Gharibi and Vaikuntanathan, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Xiong to Gharibi before the effective filing date of the claimed invention in order to protect proprietary machine learning models against intellectual property theft: computations can be performed by a device without providing the device the ability to determine what the underlying portions of the model are in plaintext form (cf. Xiong, [Col. 2, Lines 6-23] To protect proprietary machine learning models against intellectual property theft, various techniques described below use a homomorphic encryption scheme to encrypt a portion of a machine learning model, such as one or more layers of a neural network. Plaintext input data to be input to the encrypted portion of the model is likewise encrypted using the homomorphic encryption scheme. With the plaintext input data and portion of the machine learning model encrypted, computations can be performed over the encrypted input data and the encrypted portion of the model without decrypting either. Similarly, if output from such operations are to be input into another encrypted portion of the machine learning model, additional computations involved with the other portion of the machine learning model can be performed without decryption.
In this manner, computations can be performed by a device without providing the device the ability to determine what the underlying portions of the model are in plaintext form.).

Regarding claim 2 and analogous claim 13, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 1 and The computer program product of claim 10, respectively. Gharibi teaches wherein the partitioning is performed using a received partition location ([0041] Split layer (S) refers to the layer [partitioning is performed using a received partition location] (the cut layer) where A and B are split. In FIG. 2, SA represents a split layer or data sent from A to B and SB represents a split layer sent from B to A.). Gharibi, Vaikuntanathan, and Xiong are combinable for the same rationale as set forth above with respect to claim 1.

Regarding claim 3 and analogous claim 14, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 1 and The computer program product of claim 10, respectively. Xiong teaches wherein the partitioning is performed using a partition location computed using data of a system computing the homomorphically encrypted output of the trained neural network ([Col. 2, Lines 60-64] In some examples, [the partitioning is performed] the first portion of the machine learning model comprises a set of layers of a neural network and the second portion of the machine learning model comprises a set of second layers of the neural network.; [Col. 5, Lines 31-37] As described in FIG. 1, the neural network 108 may have multiple layers and each layer comprises sequences of operation, wherein the [using a partition location] output of one layer and operation can be used as input to another layer and operation. The neural network 108 partially encrypts the input plaintext 106 into ciphertext output 110, which comprises both partial ciphertext and partial plaintext output data.; [Col.
8, Lines 11-18] The neural network 210, in this example, processes the inputs 208 through the first layers of the neural network to result in intermediate inputs 218, which can be output of the plaintext layers 212, sends the intermediate inputs 218 through the homomorphic cipher function 216, which results in partially encrypted input data 220. This partially [computed using data of a system computing the homomorphically encrypted output of the trained neural network] encrypted output data 220 is made up of plaintext output data and ciphertext output data.; [Col. 18, Lines 5-11] The data store 910, in an embodiment, is operable, through logic associated therewith, to receive instructions from the application server 908 and obtain, update or otherwise process data in response thereto, and the application server 908 provides static, dynamic, or a combination of static and dynamic data in response to the received instructions.). Gharibi, Vaikuntanathan, and Xiong are combinable for the same rationale as set forth above with respect to claim 1.

Regarding claim 4 and analogous claim 15, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 3 and The computer program product of claim 14, respectively. Xiong teaches wherein the partition location is computed using static data of the system computing the homomorphically encrypted output of the trained neural network ([Col. 2, Lines 60-64] In some examples, the first portion of the machine learning model comprises a set of layers of a neural network and the second portion of the machine learning model comprises a set of second layers of the neural network.; [Col. 5, Lines 31-37] As described in FIG. 1, the neural network 108 may have multiple layers and each layer comprises sequences of operation, wherein the [partition location] output of one layer and operation can be used as input to another layer and operation.
The neural network 108 partially encrypts the input plaintext 106 into ciphertext output 110, which comprises both partial ciphertext and partial plaintext output data.; [Col. 8, Lines 11-18] The neural network 210, in this example, processes the inputs 208 through the first layers of the neural network to result in intermediate inputs 218, which can be output of the plaintext layers 212, sends the intermediate inputs 218 through the homomorphic cipher function 216, which results in partially encrypted input data 220. This partially [of the system computing the homomorphically encrypted output of the trained neural network] encrypted output data 220 is made up of plaintext output data and ciphertext output data.; [Col. 18, Lines 5-11] The data store 910, in an embodiment, is operable, through logic associated therewith, to receive instructions from the application server 908 and obtain, update or otherwise process data in response thereto, and the application server 908 provides [is computed using static data] static, dynamic, or a combination of static and dynamic data in response to the received instructions.). Gharibi, Vaikuntanathan, and Xiong are combinable for the same rationale as set forth above with respect to claim 1.

Regarding claim 5 and analogous claim 16, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 3 and The computer program product of claim 14, respectively. Xiong teaches wherein the partition location is computed using dynamic data of the system computing the homomorphically encrypted output of the trained neural network ([Col. 2, Lines 60-64] In some examples, the first portion of the machine learning model comprises a set of layers of a neural network and the second portion of the machine learning model comprises a set of second layers of the neural network.; [Col. 5, Lines 31-37] As described in FIG.
1, the neural network 108 may have multiple layers and each layer comprises sequences of operation, wherein the [partition location] output of one layer and operation can be used as input to another layer and operation. The neural network 108 partially encrypts the input plaintext 106 into ciphertext output 110, which comprises both partial ciphertext and partial plaintext output data.; [Col. 8, Lines 11-18] The neural network 210, in this example, processes the inputs 208 through the first layers of the neural network to result in intermediate inputs 218, which can be output of the plaintext layers 212, sends the intermediate inputs 218 through the homomorphic cipher function 216, which results in partially encrypted input data 220. This partially [of the system computing the homomorphically encrypted output of the trained neural network] encrypted output data 220 is made up of plaintext output data and ciphertext output data.; [Col. 18, Lines 5-11] The data store 910, in an embodiment, is operable, through logic associated therewith, to receive instructions from the application server 908 and obtain, update or otherwise process data in response thereto, and the application server 908 provides static, [is computed using dynamic data] dynamic, or a combination of static and dynamic data in response to the received instructions.). Gharibi, Vaikuntanathan, and Xiong are combinable for the same rationale as set forth above with respect to claim 1.

Regarding claim 6 and analogous claim 17, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 1 and The computer program product of claim 10, respectively.
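As an editorial aside, the partition-location limitations at issue in claims 2-5 (a cut layer chosen from static or dynamic data of the computing system) can be sketched briefly. The following is a hypothetical illustration only: the layer names, the `choose_cut` heuristic, and the 256 MB-per-layer scaling are invented for this sketch and appear in neither the application nor the cited references.

```python
# Hypothetical sketch: pick a partition location (cut layer) from system data,
# then split a trained model's layers into client-side and server-side portions.
# All names and the scoring heuristic are invented for illustration.

LAYERS = ["conv1", "conv2", "pool", "fc1", "fc2", "softmax"]  # trained network

def choose_cut(static_mem_mb: int, dynamic_load: float) -> int:
    """More client memory and lower current load -> keep more layers client-side."""
    budget = static_mem_mb * (1.0 - dynamic_load)  # crude capacity score
    # Keep at least one layer on each side of the split.
    return max(1, min(len(LAYERS) - 1, int(budget // 256)))

def partition(layers, cut):
    """Split at the cut layer: (client-side portion, server-side portion)."""
    return layers[:cut], layers[cut:]

cut = choose_cut(static_mem_mb=1024, dynamic_load=0.5)
client_part, server_part = partition(LAYERS, cut)
assert client_part + server_part == LAYERS  # every layer lands on exactly one side
```

Under this sketch, a "received partition location" (claim 2) would correspond to `cut` arriving from elsewhere, while claims 4 and 5 correspond to deriving it from `static_mem_mb` and `dynamic_load` respectively.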
Gharibi teaches further comprising: further training, using a second set of training data, the server-side portion, the further training resulting in a further trained neural network with an improved accuracy from an accuracy of the trained neural network ([0038] The training process in this case includes a server 102 creating a model 104 and sharing the model 106A, 108A and 110A with respective clients 106, 108, 110 in a linear fashion. The clients train the respective model 106A, 108A, 110A separately when they receive the model on their turn and respectively send their [further training, using a second set of training data] trained model data [the server-side portion] back to the server 102 as shown. The server 102 averages the models and produces a new model 104 with updated weights (a.k.a. [the further training resulting in a further trained neural network] a trained model). The server 102 sends the new model or weights to the respective clients 106, 108, 110 in a linear fashion. The process is repeated a number of iterations or [with an improved accuracy from an accuracy of the trained neural network] until a specific accuracy is achieved.). Gharibi, Vaikuntanathan, and Xiong are combinable for the same rationale as set forth above with respect to claim 1.

Regarding claim 8 and analogous claim 19, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 6 and The computer program product of claim 17, respectively. Gharibi teaches further comprising: second partitioning, into a second client-side portion and a second server-side portion, the further trained neural network, wherein the second client-side portion has a different number of layers from the client-side partition, wherein the second partitioning is performed responsive to determining that an accuracy of the further trained neural network is less than a threshold accuracy ([0090] The training of the network is done by a sequence of distributed training processes.
The forward propagation and the back-propagation can take place as follows. With the raw data, a client (say client 702) trains the client-side network 702A up to a certain layer of the network, which can be called the cut layer or the split layer, and sends the activations of the cut layer to the server 710. The server 710 trains the remaining layers of the NN with the activations that it received from the client 702. This completes a single forward propagation step. [second partitioning, into a second client-side portion and a second server-side portion, the further trained neural network] A similar process occurs in parallel for the [wherein the second client-side portion has a different number of layers from the client-side partition] second client 704 and its client side network 704A and its data and generated activations which are transmitted to the server 710. A further similar process occurs in parallel for the third client 706 and its client side network 706A and its data and generated activations which are transmitted to the server 710.; [0095] As introduced above, client one 702, client two 704 and client three 706 could have different data types. The server 710 will create two parts of the network and sends one part 702A, 704A, 706A to all the clients 702, 704, 706. The system [second partitioning is performed responsive to determining that an accuracy of the further trained neural network is less than a threshold accuracy] repeats certain steps until an accuracy condition or other condition is met, such as all the clients sending data to the part of the network that they have, and sends the output to the server 710.); and retraining, using a third set of training data, the second server-side portion, the retraining resulting in an accuracy improvement from the further trained neural network ([0095] As introduced above, client one 702, client two 704 and client three 706 could have different data types.
The server 710 will create two parts of the network and sends one part 702A, 704A, 706A to all the clients 702, 704, 706. The system repeats certain steps until an accuracy condition or other condition is met, such as all the clients sending data to the part of the network that they have, and sends the output to the server 710. The server 710 calculates the loss value for each client and the average loss across all the clients. The [retraining, using a third set of training data, the second server-side portion] server 710 can update its model using a weighted average of the gradients that it computes during back-propagation and sends the gradients back to all the clients 702, 704, 706. The clients 702, 704, 706 receives the gradients from the server 710 and each client 702, 704, 706 performs the back-propagation on their client-side network 702A, 704A, 706A and computes the respective gradients for each client-side-network 702A, 704A, 706A. The respective gradients from the client-side networks 702A, 704A, 706A can then be transmitted back to the server 710 which conducts an averaging of the client-side updates and sends the global result back to all the clients 702, 704, 706.; [0050] [retraining resulting in an accuracy improvement from the further trained neural network] FIG. 4 illustrates the improvement to training neural networks disclosed herein. This improvement can be characterized as a blind learning approach and addresses some of the deficiencies of the approaches disclosed above. FIG. 4 introduces a parallel processing approach. The parallel and independent processing causes the model training to occur at a faster pace than the other models described above.). Gharibi, Vaikuntanathan, and Xiong are combinable for the same rationale as set forth above with respect to claim 1.

Regarding claim 9, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 1.
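The encrypted-intermediate-result computation at the heart of these rejections (a client encrypts an intermediate activation, and the server computes on the ciphertext without decrypting) can be made concrete with a toy additively homomorphic scheme. The sketch below uses textbook Paillier with tiny, insecure parameters purely for illustration; it is not the scheme used by the application or any cited reference, which contemplate practical HE schemes, and all variable names are invented.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Illustration only:
# tiny insecure primes; real HE inference uses lattice schemes (e.g., CKKS/BFV).
p, q = 999983, 1000003           # small well-known primes; never use in practice
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # Carmichael function of n
mu = pow(lam, -1, n)             # valid because we take g = n + 1

def encrypt(m):
    """E(m) = (1+n)^m * r^n mod n^2 for random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Client: compute plaintext intermediate activations, then encrypt them
# (the claim-9 pattern: computed unencrypted, subsequently encrypted).
activations = [3, 7, 2]
enc_acts = [encrypt(a) for a in activations]

# Server: homomorphic dot product with plaintext integer weights, using
# E(a)^w = E(w*a) and E(x)*E(y) = E(x+y) under Paillier.
weights = [5, 1, 4]
enc_out = 1
for c, w in zip(enc_acts, weights):
    enc_out = (enc_out * pow(c, w, n2)) % n2

assert decrypt(enc_out) == sum(w * a for w, a in zip(weights, activations))
```

The server never sees `activations` in the clear, yet the decrypted `enc_out` equals the plaintext dot product, which is the property the claim language relies on for the server-side linear layers.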
Xiong teaches wherein the homomorphically encrypted intermediate result is computed by the client-side portion in an unencrypted form and subsequently homomorphically encrypted ([Col. 14, Lines 24-43] In an embodiment, the process 700 comprises obtaining 702 input data to be processed using a machine learning model, the obtained input data being in plaintext form, such as described above. The input data can be, for example, inputs 208 described above in connection with FIG. 2. With the obtained input data, the process 700 can comprise using 704 a first portion of the machine learning model to calculate plaintext intermediate data, where the first portion of the machine learning model is in plaintext form and a second portion of the machine learning model is in ciphertext form according to a homomorphic encryption scheme, such as described above. The plaintext intermediate data can be, for example, outputs 218 described above in connection with FIG. 2. [wherein the homomorphically encrypted intermediate result is computed by the client-side portion in an unencrypted form] With the plaintext intermediate data obtained, the process 700 can comprise using 706 the homomorphic encryption scheme to [and subsequently homomorphically encrypted] encrypt the plaintext intermediate data to obtain ciphertext intermediate data. The ciphertext intermediate data can be, for example, outputs of the homomorphic cipher function 220 described above in connection with FIG. 2.). Gharibi, Vaikuntanathan, and Xiong are combinable for the same rationale as set forth above with respect to claim 1.

Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Gharibi, in view of Vaikuntanathan, Xiong, and further in view of Barham et al. (U.S.
Pre-Grant Publication No. 20200228339, hereinafter 'Barham').

Regarding claim 7 and analogous claim 18, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer-implemented method of claim 6 and The computer program product of claim 17, respectively. Gharibi, as modified by Vaikuntanathan and Xiong, fails to teach wherein the first set of training data is public and the second set of training data is nonpublic. Barham teaches wherein the first set of training data is public and the second set of training data is nonpublic ([0016] In embodiments, the neural network classifier, or a [wherein the first set of training data is public] subset of the layers of the neural network classifier may be trained using publicly-available non-private biometric information, and [and the second set of training data is nonpublic] other layers of the neural network classifier may be re-trained using private biometric information of a specific individual. For example, to reduce the memory and speed requirements, the whole Neural-Network model may be trained using publicly available biometric data (that doesn't have any privacy constraints). The weights of, for example, the first few layers may be fixed or store and then, during enrollment, the remaining layers may be retrained using private, person-specific, biometric data. Thus, for example, only the last few layers need be encrypted. During verification, the biometric features may be first fed to the not-encrypted NN layers, then they may be encrypted and sent to the encrypted model. The trained weights may be encrypted 210 using, for example, homomorphic encryption and transmitted 114 to server 102, which may store the encrypted neural network 108 for that person. Homomorphic encryption allows computation on encrypted data such that when the results of the computation on the encrypted data is decrypted, the results are the same as if the computation had been performed on the unencrypted or plaintext data.).
Gharibi, Vaikuntanathan, Xiong, and Barham are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Gharibi, Vaikuntanathan, and Xiong, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Barham to Gharibi before the effective filing date of the claimed invention in order to provide security claims that can be measured against current cryptographic solutions, such as symmetric and asymmetric methods, since embodiments may include known and accepted cryptographic modules (cf. Barham, [0004] Embodiments of the present systems and methods may provide encrypted biometric information that can be stored and used for authentication with undegraded recognition performance. Embodiments may provide advantages over current techniques. For example, embodiments may provide security claims that can be measured against current cryptographic solutions, such as symmetric and asymmetric methods, since embodiments may be include known and accepted cryptographic modules. Further, the degradation in recognition performance rates can be described as a trade-off with memory and speed requirements, and for industry acceptable performance requirements embodiments may achieve both.).

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Gharibi, in view of Vaikuntanathan, Xiong, and further in view of Rakshit et al. (U.S. Pre-Grant Publication No. 20210344995, hereinafter 'Rakshit').

Regarding claim 11, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer program product of claim 10.
Gharibi, as modified by Vaikuntanathan and Xiong, fails to teach wherein the stored program instructions are stored in a computer readable storage device in a data processing system, and wherein the stored program instructions are transferred over a network from a remote data processing system. Rakshit teaches wherein the stored program instructions are stored in a computer readable storage device in a data processing system ([0101] While it is understood that the process software (e.g., any of the instructions stored in instructions 560 of FIG. 5 and/or any software configured to perform any portion of the method described with respect to FIGS. 2-4 and/or implement any portion of the functionality discussed in FIG. 1) can be deployed by manually loading it directly in the client, server, and proxy computers via loading a storage medium such as a CD, DVD, etc., the process software can also be automatically or semi-automatically deployed into a computer system by sending the process software to a central server or a group of central servers. The process software is then downloaded into the client computers that will execute the process software.), and wherein the stored program instructions are transferred over a network from a remote data processing system ([0062] FIG. 5 illustrates a block diagram of an example computer 500 in accordance with some embodiments of the present disclosure. In various embodiments, computer 500 can perform any or all of the method described in FIGS. 2-4 and/or implement the functionality discussed in any one of FIG. 1. In some embodiments, computer 500 receives instructions related to the aforementioned methods and functionalities by [wherein the stored program instructions are transferred over a network from a remote data processing system] downloading processor-executable instructions from a remote data processing system via network 550.).
Gharibi, Vaikuntanathan, Xiong, and Rakshit are considered to be analogous to the claimed invention because they are in the same field of machine learning. In view of the teachings of Gharibi, Vaikuntanathan, and Xiong, it would have been obvious for a person of ordinary skill in the art to apply the teachings of Rakshit to Gharibi before the effective filing date of the claimed invention in order to monitor, control, and report resource usage, providing transparency for both the provider and consumer of the utilized service (cf. Rakshit, [0076] Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.).

Regarding claim 12, Gharibi, as modified by Vaikuntanathan and Xiong, teaches The computer program product of claim 10. Gharibi, as modified by Vaikuntanathan and Xiong, fails to teach wherein the stored program instructions are stored in a computer readable storage device in a server data processing system, and wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system, further comprising: program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use. Rakshit teaches wherein the stored program instructions are stored in a computer readable storage device in a server data processing system ([0101] While it is understood that the process software (e.g., any of the instructions stored in instructions 560 of FIG.
5 and/or any software configured to perform any portion of the method described with respect to FIGS. 2-4 and/or implement any portion of the functionality discussed in FIG. 1) can be deployed by manually loading it directly in the client, server, and proxy computers via loading a storage medium such as a CD, DVD, etc., the process software can also be automatically or semi-automatically deployed into a computer system by sending the process software to a central server or a group of central servers. The process software is then downloaded into the client computers that will execute the process software.) , and wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system for use in a computer readable storage device associated with the remote data processing system, further comprising ([0062] FIG. 5 illustrates a block diagram of an example computer 500 in accordance with some embodiments of the present disclosure. In various embodiments, computer 500 can perform any or all of the method described in FIGS. 2-4 and/or implement the functionality discussed in any one of FIG. 1. In some embodiments, for use in a computer readable storage device associated with the remote data processing system computer 500 receives instructions related to the aforementioned methods and functionalities by wherein the stored program instructions are downloaded in response to a request over a network to a remote data processing system downloading processor-executable instructions from a remote data processing system via network 550.) : program instructions to meter use of the program instructions associated with the request; and program instructions to generate an invoice based on the metered use ([0091] In one example, management layer 80 may provide the functions described below. 
Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. program instructions to meter use of the program instructions associated with the request Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and program instructions to generate an invoice based on the metered use billing or invoicing for consumption of these resources.) . Gharibi, Vaikuntanathan, Xiong, and Rakshit are combinable for the same rationale as set forth above with respect to claim 11. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gomez et al. (U.S. Pre-Grant Publication No. 20200036510 ) teaches systems and methods receiving input data to be processed by an encrypted neural network (NN) model, and encrypting the input data using a fully homomorphic encryption (FHE) public key associated with the encrypted NN model to generate encrypted input data. Any inquiry concerning this communication or earlier communications from the examiner should be directed to FILLIN "Examiner name" \* MERGEFORMAT MAGGIE MAIDO whose telephone number is FILLIN "Phone number" \* MERGEFORMAT (703) 756-1953 . The examiner can normally be reached FILLIN "Work Schedule?" \* MERGEFORMAT M-Th: 6am - 4pm . Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FILLIN "SPE Name?" \* MERGEFORMAT Michael Huntley can be reached on FILLIN "SPE Phone?" \* MERGEFORMAT (303) 297-4307 . 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MM/
Examiner, Art Unit 2129

/MICHAEL J HUNTLEY/
Supervisory Patent Examiner, Art Unit 2129
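For context on the application's subject matter, the Gomez reference cited as pertinent art encrypts neural-network inputs under a fully homomorphic encryption (FHE) public key so that inference can run on ciphertexts. The core idea — computing on encrypted data without decrypting it — can be illustrated with a toy additively homomorphic scheme (Paillier-style). The parameters below are deliberately tiny and insecure, chosen only for illustration; this is not Gomez's scheme:

```python
import math
import random

# Toy Paillier cryptosystem. Tiny primes for illustration only -- NOT secure.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice for Paillier
lam = math.lcm(p - 1, q - 1)   # private key: lambda
mu = pow(lam, -1, n)           # with g = n + 1, mu = lambda^{-1} mod n

def encrypt(m: int) -> int:
    """Encrypt plaintext m in [0, n) under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt ciphertext c with the private key (lambda, mu)."""
    L = (pow(c, lam, n2) - 1) // n   # L(x) = (x - 1) / n
    return (L * mu) % n

def he_add(c1: int, c2: int) -> int:
    """Homomorphic addition: multiplying ciphertexts adds plaintexts."""
    return (c1 * c2) % n2

# The sum 3 + 4 is computed entirely on ciphertexts.
c_sum = he_add(encrypt(3), encrypt(4))
assert decrypt(c_sum) == 7
```

Real FHE schemes used for neural-network inference (e.g., CKKS) additionally support ciphertext multiplication and approximate arithmetic; this sketch shows only additive homomorphism, the simplest instance of the property.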

Prosecution Timeline

Jan 30, 2023
Application Filed
Nov 01, 2023
Response after Non-Final Action
Feb 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602603
MULTI-AGENT INFERENCE
2y 5m to grant Granted Apr 14, 2026
Patent 12596933
CONTEXT-AWARE ENTITY LINKING FOR KNOWLEDGE GRAPHS TO SUPPORT DECISION MAKING
2y 5m to grant Granted Apr 07, 2026
Patent 12579463
GENERATIVE REASONING FOR SYMBOLIC DISCOVERY
2y 5m to grant Granted Mar 17, 2026
Patent 12579452
EVALUATION SCORE DETERMINATION MACHINE LEARNING MODELS WITH DIFFERENTIAL PERIODIC TIERS
2y 5m to grant Granted Mar 17, 2026
Patent 12566941
EXTENSION OF EXISTING NEURAL NETWORKS WITHOUT AFFECTING EXISTING OUTPUTS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
85%
With Interview (+20.7%)
4y 3m
Median Time to Grant
Low
PTA Risk
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
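The headline projections follow from simple arithmetic on the examiner's career record (assuming the stated 23-of-36 allow record and the +20.7-point interview lift):

```python
granted, resolved = 23, 36            # examiner's resolved-case record
allow_rate = granted / resolved * 100 # career allow rate, percent
interview_lift = 20.7                 # percentage-point lift with interview

print(round(allow_rate))                    # -> 64 (Grant Probability)
print(round(allow_rate + interview_lift))   # -> 85 (With Interview)
```

The 64% figure reproduces the career allow rate shown above, and adding the interview lift yields the 85% "With Interview" projection.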
