DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-15 are currently pending.
Information Disclosure Statement
The Information Disclosure Statement (IDS) submitted by Applicant on 10/25/2022 has been considered.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: “Neural Networks for Encrypted Data”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. (Step 1)
Claims 1 and 7 are apparatus and/or system type claims that recite “a neural network” and “an artificial neural network.” The means to implement the apparatus and/or system may be regarded as software per se because the apparatus and/or system does not recite any hardware as part of the system and/or the software is not tangibly embodied on any sort of physical medium.
Dependent claims 2-6 and 8-12 are likewise rejected under 35 U.S.C. 101, as they also fail to recite any hardware for these system claims.
Claims 1-15 are further rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (abstract idea) without significantly more.
Regarding claim 1,
Step 1: As stated above, Claim 1 is rejected because the claimed invention is directed to non-statutory subject matter.
Step 2A, Prong 1: Claim 1 recites the following limitations:
perform a first cryptographic operation on input data; (i.e., a person can, mentally and/or with the aid of pen and paper, encrypt data by means of a cryptographic operation)
perform processing on the data; (i.e., a person can mentally process data by means of analysis, judgments, or observations)
perform a second cryptographic operation on the processed data. (i.e., a person can, mentally and/or with the aid of pen and paper, encrypt data by means of a cryptographic operation)
Hence, the claim recites an abstract idea.
Step 2A, Prong 2: The additional elements of “a neural network” comprising a “first portion, comprising a plurality of layers of the neural network”, a “second portion comprising a plurality of layers of the neural network”, and a “third portion comprising a plurality of layers of the neural network” are recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “a neural network” comprising a “first portion, comprising a plurality of layers of the neural network”, a “second portion comprising a plurality of layers of the neural network”, and a “third portion comprising a plurality of layers of the neural network” to perform the steps stated above amounts to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
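For illustration only (this sketch is the examiner's, not Applicant's disclosure): the three-portion arrangement recited in claim 1 can be modeled with a toy shift cipher standing in for the recited cryptographic operations, with the constant KEY playing the role the claims assign to the layer weights. All names and the cipher itself are hypothetical simplifications.

```python
# Illustrative sketch only: first portion "decrypts" input data, second
# portion processes it, third portion "re-encrypts" the processed data.

KEY = 7  # toy key; in the claims, the weights of the layers play this role

def first_portion(ciphertext):
    """Toy 'decryption' layers: reverse a Caesar-style shift."""
    return [c - KEY for c in ciphertext]

def second_portion(plaintext):
    """Toy 'processing' layers: scale each value."""
    return [2 * x for x in plaintext]

def third_portion(processed):
    """Toy 're-encryption' layers using the same shift as a key."""
    return [c + KEY for c in processed]

def pipeline(ciphertext):
    # decrypt -> process -> encrypt, mirroring the three claimed portions
    return third_portion(second_portion(first_portion(ciphertext)))

encrypted_input = [x + KEY for x in [1, 2, 3]]  # [8, 9, 10]
print(pipeline(encrypted_input))                # [9, 11, 13]
```

The sketch also shows why the limitations are treated as mental steps: each portion reduces to arithmetic a person could perform with pen and paper.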
Regarding claim 2,
Step 2A, Prong 1: Claim 2 recites an abstract idea as inherited from claim 1, as stated above.
Step 2A, Prong 2: Claim 2 recites the additional elements of “wherein the input data is encrypted and the first cryptographic operation is a decryption operation; and/or the processed data is unencrypted and the second cryptographic operation is an encryption operation.” These additional elements merely generally link the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 2 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “wherein the input data is encrypted and the first cryptographic operation is a decryption operation; and/or the processed data is unencrypted and the second cryptographic operation is an encryption operation” merely generally links the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 3,
Step 2A, Prong 1: Claim 3 recites an abstract idea as inherited from claim 1, as stated above.
Step 2A, Prong 2: Claim 3 recites the additional elements of “wherein a set of weights of the plurality of layers of the first portion represents a decryption key to decrypt encrypted input data”. These additional elements merely generally link the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 3 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “wherein a set of weights of the plurality of layers of the first portion represents a decryption key to decrypt encrypted input data” merely generally links the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 4,
Step 2A, Prong 1: Claim 4 recites an abstract idea as inherited from claim 1, as stated above.
Step 2A, Prong 2: Claim 4 recites the additional elements of “wherein a set of weights of the plurality of layers of the third portion represents an encryption key to encrypt the processed data.” These additional elements merely generally link the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 4 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “wherein a set of weights of the plurality of layers of the third portion represents an encryption key to encrypt the processed data” merely generally links the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 5,
Step 2A, Prong 1: Claim 5 recites an abstract idea as inherited from claim 1, as stated above.
Step 2A, Prong 2: Claim 5 recites the additional elements of “wherein the neural network is a modular neural network in which any of the first portion, second portion and third portion are substitutable”. These additional elements merely generally link the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 5 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “wherein the neural network is a modular neural network in which any of the first portion, second portion and third portion are substitutable” merely generally links the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 6,
Step 2A, Prong 1: Claim 6 recites an abstract idea as inherited from claim 1, as stated above. Claim 6 further recites:
encrypt the processed data according to Advanced Encryption Standard, AES (i.e., under broadest reasonable interpretation, a person can encrypt data according to the Advanced Encryption Standard (AES) with the aid of pen and paper).
Hence the claim recites an abstract idea.
Step 2A, Prong 2: Claim 6 recites the additional elements of “the third portion” and “respective layers of the third portion”, which are recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 6 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of “the third portion” and “respective layers of the third portion” to perform the steps stated above amount to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 7,
Step 1: As stated above, Claim 7 is rejected because the claimed invention is directed to non-statutory subject matter.
Step 2A, Prong 1: Claim 7 recites the following limitations:
An artificial neural network comprising:
convert ciphertext into plaintext; (i.e., a person can mentally or with the aid of pen and paper decrypt encrypted ciphertext data into plaintext) and
perform processing on the plaintext. (i.e., a person can mentally process plaintext by means of analysis, judgments, or observations)
Hence, the claim recites an abstract idea.
Step 2A, Prong 2: The additional elements of “an artificial neural network” comprising a “first plurality of layers” and a “second plurality of layers” are recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 7 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of a “first plurality of layers” and a “second plurality of layers” to perform the steps stated above amount to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 8,
Step 2A, Prong 1: Claim 8 recites an abstract idea as inherited from claim 7, as stated above. Claim 8 further recites:
convert an output of the processing into ciphertext (i.e., under broadest reasonable interpretation, a person can, with the aid of pen and paper, convert analyzed or processed data (i.e., an output of the processing) into ciphertext.)
Hence the claim recites an abstract idea.
Step 2A, Prong 2: Claim 8 recites the additional element of “a third plurality of layers”, which is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 8 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “a third plurality of layers” to perform the steps stated above amounts to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 9,
Step 2A, Prong 1: Claim 9 recites an abstract idea as inherited from claim 7, as stated above.
Step 2A, Prong 2: Claim 9 recites the additional elements of “wherein a set of weights of the third plurality of layers represents a key to convert the output of the processing into ciphertext.” These additional elements merely generally link the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 9 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “wherein a set of weights of the third plurality of layers represents a key to convert the output of the processing into ciphertext” merely generally links the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 10,
Step 2A, Prong 1: Claim 10 recites an abstract idea as inherited from claim 7, as stated above.
Step 2A, Prong 2: Claim 10 recites the additional elements of “wherein an output of the processing is output from the artificial neural network as plaintext”. These additional elements merely generally link the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 10 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “wherein an output of the processing is output from the artificial neural network as plaintext” merely generally links the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 11,
Step 2A, Prong 1: Claim 11 recites an abstract idea as inherited from claim 7, as stated above.
Step 2A, Prong 2: Claim 11 recites the additional elements of “wherein a set of weights of the first plurality of layers represents a key to convert the ciphertext to plaintext”. These additional elements merely generally link the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 11 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “wherein a set of weights of the first plurality of layers represents a key to convert the ciphertext to plaintext” merely generally links the use of the judicial exception to a particular technological environment or field of use. (see MPEP 2106.05(h)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 12,
Step 2A, Prong 1: Claim 12 recites an abstract idea as inherited from claims 7 and 8, as stated above. Claim 12 further recites:
encrypt the processed data according to Advanced Encryption Standard, AES (i.e., under broadest reasonable interpretation, a person can encrypt data according to the Advanced Encryption Standard (AES) with the aid of pen and paper).
Hence the claim recites an abstract idea.
Step 2A, Prong 2: Claim 12 recites the additional elements of “the third plurality of layers” and “respective layers of the third plurality of layers”, which are recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 12 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of “the third plurality of layers” and “respective layers of the third plurality of layers” to perform the steps stated above amount to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 13,
Step 1: Claim 13 is directed to a method.
Step 2A, Prong 1: Claim 13 recites the following limitations:
A method comprising:
performing processing on plaintext data; and (i.e., a person can mentally process plaintext data by means of analysis, observations, or judgments)
encrypting a result of the processing. (i.e., a person can mentally or with the aid of pen and paper encrypt plaintext data)
Hence the claim recites an abstract idea.
Step 2A, Prong 2: The additional elements of “a processing part of a neural network” and an “encryption part of the neural network” are recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 13 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements of “a processing part of a neural network” and an “encryption part of the neural network” to perform the steps stated above amount to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 14,
Step 2A, Prong 1: Claim 14 recites an abstract idea as inherited from claim 13, as stated above. Claim 14 further recites:
decrypting ciphertext data into plaintext data (i.e., under broadest reasonable interpretation, a person can decrypt ciphertext data into plaintext data with the aid of pen and paper)
Hence the claim recites an abstract idea.
Step 2A, Prong 2: Claim 14 recites the additional element of “a decryption part of the neural network”, which is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Claim 14 further recites the additional element of “transferring the plaintext data to the processing part of the neural network for processing”. This additional element is considered insignificant extra-solution activity consisting of mere data transmission. (see MPEP 2106.05(g)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 14 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “a decryption part of the neural network” is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Claim 14 further recites the additional element of “transferring the plaintext data to the processing part of the neural network for processing”. This additional element has been considered insignificant extra-solution activity consisting of mere data transmission. (see MPEP 2106.05(g)). As such, this additional element is further analyzed under Step 2B to determine whether it is more than what the courts have considered well-understood, routine, and conventional activity in the field. The court decisions cited in MPEP 2106.05(d)(II) have held that mere data transmission over a network, as claimed here, is well-understood, routine, and conventional activity in the field, as required under Berkheimer (see, e.g., receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Regarding claim 15,
Step 2A, Prong 1: Claim 15 recites an abstract idea as inherited from claim 13, as stated above. Claim 15 further recites:
encrypting … performed according to Advanced Encryption Standard, AES (i.e., under broadest reasonable interpretation, a person can encrypt data according to the Advanced Encryption Standard (AES) with the aid of pen and paper).
perform encryption procedures according to AES (i.e., under broadest reasonable interpretation, a person can encrypt data according to the Advanced Encryption Standard (AES) with the aid of pen and paper)
Hence the claim recites an abstract idea.
Step 2A, Prong 2: Claim 15 recites the additional element of “respective layers of the neural network”, which is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the judicial exception using generic computer components. (see MPEP 2106.05(f)). Hence the claim does not recite additional elements that integrate the judicial exception into a practical application. Since the claim as a whole, looking at the additional elements individually and in combination, does not contain any other additional elements that are indicative of integration into a practical application, the claim is directed to an abstract idea.
Step 2B: Claim 15 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of “respective layers of the neural network” to perform the steps stated above amounts to no more than mere instructions to apply the judicial exception using generic computer components (see MPEP 2106.05(f)). Hence the claim lacks limitations which amount to significantly more than the judicial exception or an inventive concept, and is rejected. Considering the additional elements individually and in combination, and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. Therefore, the claim is not patent eligible.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 7, 10, 13, and 14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Google LLC (WO2018027050A1 published Feb. 8, 2018).
Regarding claim 7, Google teaches an artificial neural network comprising: a first plurality of layers to convert ciphertext into plaintext; and a second plurality of layers to perform processing on the plaintext (Google [0050] teaches as with the symmetric transformation system 100 (Figure 1), the system 400 depicted in Figure 4 includes an encoder neural network 402, a trusted decoder neural network 404, and an adversary decoder neural network 406. The trusted decoder neural network 404 may be a trusted decoder that has been trained jointly with the encoder neural network 402, and the pair of networks 402, 404 may be adversarially trained with the adversary decoder neural network 406 as an adversary decoder network. The encoder neural network 402 is configured to process the primary neural network input 410 (e.g., a plaintext data item) and public neural network input key 414 to generate an encoded representation of the primary neural network input 416 (e.g., a ciphertext item). The trusted decoder neural network 404 is configured to process the encoded representation of the primary neural network input 416 [i.e., convert ciphertext into plaintext] and the secret neural network input key 412 to generate a first estimated reconstruction of the primary neural network input 418 [Note: the trusted decoder neural network converts ciphertext into plaintext and further generates an estimated reconstruction of the primary neural network input, this being understood as “to perform processing on the plaintext”].)
Regarding claim 10, Google teaches all of the limitations of claim 7, and Google further teaches wherein an output of the processing is output from the artificial neural network as plaintext (Google [0050] teaches the encoder neural network 402 is configured to process the primary neural network input 410 (e.g., a plaintext data item) and public neural network input key 414 to generate an encoded representation of the primary neural network input 416 (e.g., a ciphertext item). The trusted decoder neural network 404 is configured to process the encoded representation of the primary neural network input 416 [i.e., converts ciphertext into plaintext] and the secret neural network input key 412 to generate a first estimated reconstruction of the primary neural network input 418 [Note: the trusted decoder neural network converts ciphertext into plaintext and further generates an estimated reconstruction of the primary neural network input, this being understood as “wherein an output of the processing is output from the artificial neural network as plaintext”]).
Regarding claim 13, Google teaches a method comprising: performing processing on plaintext data, in a processing part of a neural network; and encrypting a result of the processing, in an encryption part of the neural network (Google, Fig. 1 teaches Encoder Neural Network 102 performing cryptographic operation on Primary Neural Network input 108 with Neural Network Input Key 110; Google [0018] teaches the encoder neural network can include one or more fully-connected layers followed by one or more convolutional layers; Google [0038] further teaches the encoder neural network 102 is configured to process a primary neural network input 108 and a neural network input key to generate an encoded representation of the primary neural network input 102. The encoded representation of the primary neural network input 112 is a modified version of the primary neural network input 108 that has been transformed according to parameters of the encoded neural network 102 and based on the neural network input key 110. In a cryptographic context, the encoder neural network 102 can be said to apply cryptographic transformations to the primary neural network input 112 using a key. For example, the primary neural network input 112 can be a plaintext representation of input data, the neural network input key 110 can be a cryptographic key, and the encoded representation of the primary neural network input 112 can be ciphertext, i.e., an encrypted version of the primary neural network input 112).
Regarding claim 14, Google teaches all of the limitations of claim 13, and Google further teaches further comprising decrypting ciphertext data into plaintext data, in a decryption part of the neural network; and transferring the plaintext data to the processing part of the neural network for processing (Google, [0018] teaches a second decoder neural network can include one or more fully connected layers followed by one or more convolutional layers; See Fig. 1, trusted decoder neural network 104; Google [0038] teaches trusted decoder neural network 104 has access to the neural network input key 110 that was used to generate the encoded representation of the primary neural network input 112. The trusted decoder neural network 104 processes the primary neural network 108 along with the neural network input key to generate the first estimated reconstruction to the primary neural network input 114; Google, [0043] further teaches at stage 202, the decoder neural network obtains two inputs: (i) an encoded representation of a primary neural network input, e.g., encoded representation 112, and (ii) a neural network input key that is distinct from the encoded representation of the primary neural network input, e.g., neural network input key 110. The encoded representation of the primary neural network input may have been generated by an encoder neural network, e.g., encoder neural network 102, by processing a primary neural network input, e.g., primary neural network input 108, and the neural network input key.; Google [0044] further teaches at stage 204, the decoder neural network processes the two inputs that were obtained at stage 202 to generate an estimated reconstruction of the primary neural network input, e.g., first estimated reconstruction 114. The estimated reconstruction of the primary neural network input is generated by processing the inputs through a series of transformations dictated by one or more hidden layers of the decoder neural network. 
At stage 206, the decoder neural network outputs the estimated reconstruction of the primary neural network input. The output can be stored or otherwise made available to an application or system that further processes the output, e.g., for presentation to a user.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 2, 3, 4, 5, 6, 8, 9, 11, 12, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Google LLC (WO2018027050A1, published Feb. 8, 2018) in view of Mody et al. (WO2018217965A1, published Nov. 29, 2018).
Regarding claim 1, Google teaches a neural network comprising:
a first portion, comprising a plurality of layers of the neural network, to perform a first cryptographic operation on input data (Google, Fig. 1 teaches Encoder Neural Network 102 performing cryptographic operation on Primary Neural Network input 108 with Neural Network Input Key 110; Google [0018] teaches the encoder neural network can include one or more fully-connected layers followed by one or more convolutional layers; Google [0038] further teaches the encoder neural network 102 is configured to process a primary neural network input 108 and a neural network input key to generate an encoded representation of the primary neural network input 102. The encoded representation of the primary neural network input 112 is a modified version of the primary neural network input 108 that has been transformed according to parameters of the encoded neural network 102 and based on the neural network input key 110. In a cryptographic context, the encoder neural network 102 can be said to apply cryptographic transformations to the primary neural network input 112 using a key. For example, the primary neural network input 112 can be a plaintext representation of input data, the neural network input key 110 can be a cryptographic key, and the encoded representation of the primary neural network input 112 can be ciphertext, i.e., an encrypted version of the primary neural network input 112);
a second portion, comprising a plurality of layers of the neural network, to perform processing on the data (Google, [0018] teaches in some implementations, the first decoder neural network can include one or more fully-connected layers followed by one or more convolutional layers; Google [0039] further teaches the trusted decoder neural network 104 and the adversary decoder neural network 106 are each configured to process the encoded representation of the primary neural network input 112 to generate estimated reconstructions of the primary neural network input 108. However, unlike the adversary decoder neural network 106, the trusted decoder neural network 104 has access to the neural network input key 110 that was used to generate the encoded representation of the primary neural network input 112.; See Fig. 1, adversary decoder neural network 106 [Note: the adversary decoder neural network to process the encoded representation of the primary neural network input as stated in [0039] being understood as the second portion to perform processing on the data, as claimed]);
However, Google does not distinctly or clearly disclose and a third portion, comprising a plurality of layers of the neural network, to perform a second cryptographic operation on the processed data.
Nevertheless, Mody teaches and a third portion, comprising a plurality of layers of the neural network, to perform a second cryptographic operation on the processed data (Mody, Abstract and [0067] teaches a CNN based signal processing method (800) includes receiving of an encrypted output from a first layer of a multi-layer CNN data (802). The received encrypted output is subsequently decrypted to form a decrypted input to a second layer of the multi-layer CNN data (804). A convolution (808) of the decrypted input with a corresponding decrypted weight may generate a second layer output, which may be encrypted and used as an encrypted input to a third layer of the multi-layer CNN data; Mody [0004] further teaches the CNN based signal processing may include receiving of an encrypted output from a layer (such as a first layer, a first hidden layer, etc.) of the multi-layer CNN data. The received encrypted output is subsequently decrypted to form a decrypted input to a subsequent layer (such as second layer, hidden layer, final output layer, etc.) of the multi-layer CNN data. A convolution of the decrypted input with a corresponding decrypted weight may generate a second hidden layer output, which may be encrypted and used as an encrypted input to another hidden layer of the multi-layer CNN data. After the signal processing of the layers of the multi-layer CNN data, the image classification may be generated as final output.; Mody [0005] teaches for the decryption of inputs and/or weights, and the encryption of the output, a particular key may be stored and used for the decryptions and encryptions as described herein.; Mody [0042] teaches FIG. 4 illustrates an example block diagram of the secure IP block 202 as described herein. 
As shown, the CNN secure IP block 202 may include: an input feature decryption block 402 that may be configured to receive and decrypt the input layer 300; a weight kernel decryption block 404 that may be configured to receive and decrypt the weight 310 stored from the external memories; an output feature encryption block 406 that may be configured to encrypt convolution outputs from the CNN HW engine 200; [Note: Mody has been understood to encrypt input at an input layer, decrypt the input from the first layer, and then re-encrypt the output by another layer, this re-encryption by another layer understood to read on “to perform a second cryptographic operation on the processed data”].)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the systems, methods, devices, and other techniques for training and using neural networks to encode inputs and to process encoded inputs, as taught by Google, to include the encrypted output of the secure convolutional neural network accelerator, as taught by Mody. Encrypting the output prevents malicious attempts to provide a fixed-pattern input to a given layer, decode the output, and thereby determine the weights of the given layer (and other layers).
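For illustration only (not part of the record): the Google/Mody combination described above amounts to a layer-wise decrypt → process → re-encrypt pipeline. The sketch below models that flow; the function names are hypothetical, the one-dimensional "convolution" is a simplification, and a toy XOR stream cipher stands in for the AES cited at Mody [0049].

```python
def xor_cipher(data, key):
    """Toy symmetric cipher standing in for AES: XOR each byte
    with a repeating key; applying it twice recovers the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def layer_forward(enc_input, enc_weights, layer_key, weight_key):
    """One secure layer as Mody [0004] describes: decrypt the
    previous layer's encrypted output and the stored weights,
    process them, then re-encrypt the result before output."""
    # Decrypt the encrypted output received from the prior layer.
    x = list(xor_cipher(enc_input, layer_key))
    # Decrypt the layer's stored weights with their own key.
    w = list(xor_cipher(enc_weights, weight_key))
    # "Process": a 1-D convolution-like dot product per position.
    out = bytes(sum(x[i + j] * w[j] for j in range(len(w))) % 256
                for i in range(len(x) - len(w) + 1))
    # Re-encrypt the layer output so only ciphertext leaves the block.
    return xor_cipher(out, layer_key)
```

Note that only encrypted data crosses the layer boundary in either direction, which is the property the motivation statement relies on: an attacker feeding a fixed-pattern input never sees a plaintext output from which to recover the weights.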
Regarding claim 2, Google in view of Mody teaches all of the limitations of claim 1, and the combination further teaches wherein the input data is encrypted and the first cryptographic operation is a decryption operation (Google, [0018] teaches in some implementations, the first decoder neural network can include one or more fully-connected layers followed by one or more convolutional layers; Google [0039] further teaches the trusted decoder neural network 104 and the adversary decoder neural network 106 are each configured to process the encoded representation of the primary neural network input 112 to generate estimated reconstructions of the primary neural network input 108. However, unlike the adversary decoder neural network 106, the trusted decoder neural network 104 has access to the neural network input key 110 that was used to generate the encoded representation of the primary neural network input 112.; See Fig. 1, adversary decoder neural network 106); and/or the processed data is unencrypted and the second cryptographic operation is an encryption operation (Mody [0004] further teaches the CNN based signal processing may include receiving of an encrypted output from a layer (such as a first layer, a first hidden layer, etc.) of the multi-layer CNN data. The received encrypted output is subsequently decrypted to form a decrypted input to a subsequent layer (such as second layer, hidden layer, final output layer, etc.) of the multi-layer CNN data. A convolution of the decrypted input with a corresponding decrypted weight may generate a second hidden layer output, which may be encrypted and used as an encrypted input to another hidden layer of the multi-layer CNN data. After the signal processing of the layers of the multi-layer CNN data, the image classification may be generated as final output.; Mody [0042] teaches FIG. 4 illustrates an example block diagram of the secure IP block 202 as described herein. 
As shown, the CNN secure IP block 202 may include: an input feature decryption block 402 that may be configured to receive and decrypt the input layer 300; a weight kernel decryption block 404 that may be configured to receive and decrypt the weight 310 stored from the external memories; an output feature encryption block 406 that may be configured to encrypt convolution outputs from the CNN HW engine 200; [Note: Mody has been understood to encrypt input at an input layer, decrypt the input from the first layer, and then re-encrypt the output by another layer, this re-encryption by another layer understood to read on “the second cryptographic operation is an encryption operation”]).
Motivation to combine same as stated for claim 1 above.
Regarding claim 3, Google in view of Mody teaches all of the limitations of claim 1, and the combination further teaches wherein a set of weights of the plurality of layers of the first portion represents a decryption key to decrypt encrypted input data (Mody [0005] concurrently teaches for the decryption of inputs and/or weights, and the encryption of the output, a particular key may be stored and used for the decryptions and encryptions as described herein.; Mody [0015] further teaches as further described hereinbelow, the CNN algorithm may use on-the-fly decryption of input and coefficient filters (or weights), and on-the-fly encryption of a layer output by using specific keys supplied for purposes of decryptions and encryptions.; Mody [0023] further teaches the key features block may provide different keys for each layer during the signal processing. The different keys may be used for the on-the-fly decryption of the input and weights, and the on-the-fly encryption of the output. The decryption keys for the weights may be fixed for each layer. Accordingly, for frame to frame processing, keys used for decryption of weights for each layer are fixed.).
Motivation to combine same as stated for claim 1.
[EXAMINER NOTE: Examiner notes that Google [0039] teaches trusted decoder neural network 104 has access to the neural network input key 110 that was used to generate the encoded representation of the primary neural network input 112. The trusted decoder neural network 104 processes the primary neural network 108 along with the neural network input key to generate the first estimated reconstruction to the primary neural network input 114.]
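For illustration only (not part of the record): claim 3 recites that a set of layer weights itself represents a decryption key. A minimal toy of that idea is a layer whose weight vector is the key bytes and whose operation (byte-wise XOR, a stand-in for any symmetric cipher) applies it; all names here are hypothetical.

```python
def encrypt(plaintext, key):
    """Symmetric toy cipher: XOR plaintext with a same-length key.
    Encrypt and decrypt are the identical operation."""
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt_layer(ciphertext, weights):
    """A 'layer' whose weights ARE the key: its forward pass
    (byte-wise XOR with the weight vector) decrypts the input."""
    assert len(weights) == len(ciphertext)
    return bytes(c ^ w for c, w in zip(ciphertext, weights))
```

In this toy, loading a different weight vector into the layer is equivalent to changing the decryption key, which mirrors the claim's mapping between weights and keys.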
Regarding claim 4, Google in view of Mody teaches all of the limitations of claim 1, and the combination further teaches wherein a set of weights of the plurality of layers of the third portion represents an encryption key to encrypt the processed data (Mody [0005] concurrently teaches for the decryption of inputs and/or weights, and the encryption of the output, a particular key may be stored and used for the decryptions and encryptions as described herein.; Mody [0015] further teaches as further described hereinbelow, the CNN algorithm may use on-the-fly decryption of input and coefficient filters (or weights), and on-the-fly encryption of a layer output by using specific keys supplied for purposes of decryptions and encryptions.; Mody [0023] further teaches the key features block may provide different keys for each layer during the signal processing. The different keys may be used for the on-the-fly decryption of the input and weights, and the on-the-fly encryption of the output. The decryption keys for the weights may be fixed for each layer. Accordingly, for frame to frame processing, keys used for decryption of weights for each layer are fixed.; Mody, Abstract and [0067] teaches a CNN based signal processing method (800) includes receiving of an encrypted output from a first layer of a multi-layer CNN data (802). The received encrypted output is subsequently decrypted to form a decrypted input to a second layer of the multi-layer CNN data (804). A convolution (808) of the decrypted input with a corresponding decrypted weight may generate a second layer output, which may be encrypted and used as an encrypted input to a third layer of the multi-layer CNN data [i.e., the third layer and the encrypted output have been understood as “the third portion”]).
Motivation to combine same as stated for claim 1.
Regarding claim 5, Google in view of Mody teaches all of the limitations of claim 1, and Google further teaches wherein the neural network is a modular neural network in which any of the first portion, second portion and third portion are substitutable (Google, [0078] teaches embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non transitory program carrier for execution by, or to control the operation of, data processing apparatus. [Note: [0078] has been understood as reading on the neural network being modular]; Google [0089] further teaches the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. [Note: [0089] has been understood to read on the portions being “substitutable”]).
Motivation to combine same as stated for claim 1 above.
Regarding claim 6, Google in view of Mody teaches all of the limitations of claim 1, and the combination further teaches wherein the third portion is to encrypt the processed data according to Advanced Encryption Standard, AES, and respective layers of the third portion are to perform encryption procedures according to the AES (Mody, [0049] teaches that output from the key management 408, which may use a "key interface (IF)," may include particular layer keys to blocks 402, 404 and 406 used for weights, input and output. Symmetrical encryption/decryption may be used, and it makes use of identical keys for encryption and decryption process. Therefore, the same key is preserved/provided by the key management 408. Symmetrical encryption may be used for large data (e.g., weight, input, and output). The algorithm that is used may be the Advanced Encryption Standard (AES).; Mody [0053] further teaches the AES channels 504 may implement secure decryption and encryption of the input, weights and layer output by using hardware functionalities, such as the CNN HW core 410. Accordingly, the input, weight and output that are being used in the AES channels 504 and the CNN HW core 410 are not visible to software (i.e., not accessible through software from outside of the SoC device 104).).
Motivation to combine same as stated for claim 1.
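For illustration only (not part of the record): the per-layer symmetric key scheme described at Mody [0023] and [0049] can be sketched as below. The same key serves for both encryption and decryption of a given layer's data, and each layer has its own fixed key. The KeyManagement class name is hypothetical, and a toy XOR operation stands in for the cited AES.

```python
class KeyManagement:
    """Holds one fixed symmetric key per layer (cf. Mody's
    key management 408 supplying keys over a 'key IF')."""
    def __init__(self, layer_keys):
        self._keys = dict(layer_keys)  # layer index -> key bytes

    def key_for(self, layer):
        # The identical key is handed out for both the encrypt
        # and decrypt sides, since the cipher is symmetric.
        return self._keys[layer]

def sym_crypt(data, key):
    """Symmetric toy cipher (XOR stand-in for AES): applying it
    twice with the same key recovers the original data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

Because the cipher is symmetric, whichever block encrypts a layer's output and whichever block later decrypts it must be issued the same key by the key manager, which is the property the examiner relies on in mapping Mody [0049].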
Regarding claim 8, Google teaches all of the limitations of claim 7, however Google does not distinctly disclose a third plurality of layers to convert an output of the processing into ciphertext.
Nevertheless, Mody teaches a third plurality of layers to convert an output of the processing into ciphertext (Mody, Abstract and [0067] teaches a CNN based signal processing method (800) includes receiving of an encrypted output from a first layer of a multi-layer CNN data (802). The received encrypted output is subsequently decrypted to form a decrypted input to a second layer of the multi-layer CNN data (804). A convolution (808) of the decrypted input with a corresponding decrypted weight may generate a second layer output, which may be encrypted and used as an encrypted input to a third layer of the multi-layer CNN data; Mody [0004] further teaches the CNN based signal processing may include receiving of an encrypted output from a layer (such as a first layer, a first hidden layer, etc.) of the multi-layer CNN data. The received encrypted output is subsequently decrypted to form a decrypted input to a subsequent layer (such as second layer, hidden layer, final output layer, etc.) of the multi-layer CNN data. A convolution of the decrypted input with a corresponding decrypted weight may generate a second hidden layer output, which may be encrypted and used as an encrypted input to another hidden layer of the multi-layer CNN data. After the signal processing of the layers of the multi-layer CNN data, the image classification may be generated as final output.; Mody [0005] teaches for the decryption of inputs and/or weights, and the encryption of the output, a particular key may be stored and used for the decryptions and encryptions as described herein.; Mody [0042] teaches FIG. 4 illustrates an example block diagram of the secure IP block 202 as described herein. 
As shown, the CNN secure IP block 202 may include: an input feature decryption block 402 that may be configured to receive and decrypt the input layer 300; a weight kernel decryption block 404 that may be configured to receive and decrypt the weight 310 stored from the external memories; an output feature encryption block 406 that may be configured to encrypt convolution outputs from the CNN HW engine 200; [Note: Mody has been understood to encrypt input at an input layer, decrypt the input from the first layer, and then re-encrypt the output by another layer, this re-encryption by another layer understood to read on “to convert an output of the processing into ciphertext”].)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the systems, methods, devices, and other techniques for training and using neural networks to encode inputs and to process encoded inputs, as taught by Google, to include the encrypted output of the secure convolutional neural network accelerator, as taught by Mody. Encrypting the output prevents malicious attempts to provide a fixed-pattern input to a given layer, decode the output, and thereby determine the weights of the given layer (and other layers).
Regarding claim 9, the combination of Google in view of Mody teaches all of the limitations of claim 8, and the combination further teaches wherein a set of weights of the third plurality of layers represents a key to convert the output of the processing into ciphertext (Mody [0005] concurrently teaches for the decryption of inputs and/or weights, and the encryption of the output, a particular key may be stored and used for the decryptions and encryptions as described herein.; Mody [0015] further teaches as further described hereinbelow, the CNN algorithm may use on-the-fly decryption of input and coefficient filters (or weights), and on-the-fly encryption of a layer output by using specific keys supplied for purposes of decryptions and encryptions.; Mody [0023] further teaches the key features block may provide different keys for each layer during the signal processing. The different keys may be used for the on-the-fly decryption of the input and weights, and the on-the-fly encryption of the output. The decryption keys for the weights may be fixed for each layer. Accordingly, for frame to frame processing, keys used for decryption of weights for each layer are fixed.; Mody, Abstract and [0067] teaches a CNN based signal processing method (800) includes receiving of an encrypted output from a first layer of a multi-layer CNN data (802). The received encrypted output is subsequently decrypted to form a decrypted input to a second layer of the multi-layer CNN data (804). A convolution (808) of the decrypted input with a corresponding decrypted weight may generate a second layer output, which may be encrypted and used as an encrypted input to a third layer of the multi-layer CNN data; [Note: encrypted output being understood as “to convert the output of the processing into ciphertext”]).
Motivation to combine same as stated for claim 8 above.
Regarding claim 11, Google teaches all of the limitations of claim 7, however, Google does not distinctly disclose wherein a set of weights of the first plurality of layers represents a key to convert the ciphertext to plaintext.
Nevertheless, Mody teaches wherein a set of weights of the first plurality of layers represents a key to convert the ciphertext to plaintext (Mody [0015] further teaches as further described hereinbelow, the CNN algorithm may use on-the-fly decryption of input and coefficient filters (or weights), and on-the-fly encryption of a layer output by using specific keys supplied for purposes of decryptions and encryptions.; Mody [0023] further teaches the key features block may provide different keys for each layer during the signal processing. The different keys may be used for the on-the-fly decryption of the input and weights, and the on-the-fly encryption of the output. The decryption keys for the weights may be fixed for each layer. Accordingly, for frame to frame processing, keys used for decryption of weights for each layer are fixed.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the systems, methods, devices, and other techniques for training and using neural networks to encode inputs and to process encoded inputs, as taught by Google, to include the encrypted output of the secure convolutional neural network accelerator, as taught by Mody. Encrypting the output prevents malicious attempts to provide a fixed-pattern input to a given layer, decode the output, and thereby determine the weights of the given layer (and other layers).
Regarding claim 12, Google in view of Mody teaches all of the limitations of claim 8, and the combination further teaches wherein the third plurality of layers is to encrypt the processed data according to Advanced Encryption Standard, AES, and respective layers of the third plurality of layers are to perform encryption procedures according to the AES (Mody, [0049] teaches output, which may use a "key interface (IF)," from the key management 408 may include particular layer keys to blocks 402, 404 and 406 used for weights, input and output. Symmetrical encryption/decryption may be used, and it makes use of identical keys for encryption and decryption process. Therefore, the same key is preserved/provided by the key management 408. Symmetrical encryption may be used for large data (e.g., weight, input, and output). The algorithm that is used may be the Advanced Encryption Standard (AES).; Mody [0053] further teaches the AES channels 504 may implement secure decryption and encryption of the input, weights and layer output by using hardware functionalities, such as the CNN HW core 410. Accordingly, the input, weight and output that are being used in the AES channels 504 and the CNN HW core 410 are not visible to software (i.e., not accessible through software from outside of the SoC device 104).).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the systems, methods, devices, and other techniques for training and using neural networks to encode inputs and to process encoded inputs, as taught by Google, to include the encrypted output of the secure convolutional neural network accelerator, as taught by Mody. Encrypting the output prevents malicious attempts to provide a fixed-pattern input to a given layer and then decode the output to determine the weights of the given layer (and other layers).
Regarding claim 15, Google teaches all of the limitations of claim 13, however Google does not distinctly disclose wherein the encrypting is performed according to Advanced Encryption Standard, AES, and respective layers of the neural network perform encryption procedures according to the AES.
Nevertheless, Mody teaches wherein the encrypting is performed according to Advanced Encryption Standard, AES, and respective layers of the neural network perform encryption procedures according to the AES (Mody [0050] teaches that symmetrical encryption/decryption may be used, which makes use of identical keys for the encryption and decryption processes. Therefore, the same key is preserved/provided by the key management 408. Symmetrical encryption may be used for large data (e.g., weights, input, and output). The algorithm that is used may be the Advanced Encryption Standard (AES); Mody [0051] further teaches that FIG. 5 illustrates an example parallel execution of CNN-based signal processing as described therein. As shown, a data interface 500 may supply a single data-stream of multi-layer CNN data to a deserializer component 502. In turn, the deserializer component 502 may be configured to supply hidden layers of the multi-layer CNN data to Advanced Encryption Standard (AES) channels 504-2 to 504-N, where N may be a number of hidden layers to be processed by the CNN HW core 402. For each of AES channels 504-2 to 504-N, corresponding keys 506-1 to 506-N may be independently supplied for the decrypting of the input and weights as described therein. Also, for example, the keys 506-1 to 506-N may be stored in a memory that is external to the secure IP block 202.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have modified the systems, methods, devices, and other techniques for training and using neural networks to encode inputs and to process encoded inputs, as taught by Google, to include the encrypted output of the secure convolutional neural network accelerator, as taught by Mody. Encrypting the output prevents malicious attempts to provide a fixed-pattern input to a given layer and then decode the output to determine the weights of the given layer (and other layers).
Conclusion
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
GILAD-BACHRACH et al. (US 20160350648 A1) discloses methods and systems for performing neural network computations on encrypted data, wherein encrypted data received from a user is encrypted with an encryption scheme that allows for computations on the ciphertext to generate encrypted results data.
Gomez et al. (US 20200036510 A1) discloses systems and methods for receiving input data to be processed by an encrypted neural network (NN) model, and encrypting the input data using a fully homomorphic encryption (FHE) public key associated with the encrypted NN model to generate encrypted input data. The systems and methods further provide for processing the encrypted input data to generate an encrypted inference output, using the encrypted NN model by, for each layer of a plurality of layers of the encrypted NN model, computing an encrypted weighted sum using encrypted parameters and a previous encrypted layer, the encrypted parameters comprising at least an encrypted weight and an encrypted bias, approximating an activation function for the layer as a polynomial, and computing the approximated activation function on the encrypted weighted sum to generate an encrypted layer. The generated encrypted inference output is sent to a server system for decryption.
Xu et al., “CryptoNN: Training Neural Networks over Encrypted Data” (26 Apr. 2019) discloses a privacy-preserving machine learning model with a functional encryption scheme.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEATRIZ RAMIREZ BRAVO whose telephone number is 571-272-2156. The examiner can normally be reached Mon. - Fri. 7:30 a.m. - 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, USMAAN SAEED can be reached at 571-272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.R.B./Examiner, Art Unit 2146
/USMAAN SAEED/Supervisory Patent Examiner, Art Unit 2146