Prosecution Insights
Last updated: April 19, 2026
Application No. 18/818,336

PROTECTION OF NEURAL NETWORKS BY OBFUSCATION OF ACTIVATION FUNCTIONS

Non-Final OA (§101, §103)
Filed
Aug 28, 2024
Examiner
CHEEMA, ALI H
Art Unit
2497
Tech Center
2400 — Computer Networks
Assignee
Cryptography Research Inc.
OA Round
1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% (above average; +16.5% vs TC avg; 152 granted / 204 resolved)
Interview Lift: +51.7% on resolved cases with interview
Typical Timeline: 2y 9m average prosecution; 8 applications currently pending
Career History: 212 total applications across all art units

Statute-Specific Performance

§101:  8.6% (-31.4% vs TC avg)
§103: 51.8% (+11.8% vs TC avg)
§102:  5.3% (-34.7% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Deltas are measured against a Tech Center average estimate; based on career data from 204 resolved cases.

Office Action

§101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Office action is in response to the application filed on 08/28/2024. The preliminary amendment filed on 10/10/2024 has been considered. In the preliminary amendment, claims 1-20 have been cancelled, new claims 21-40 have been added, claims 21, 30 and 37 are independent, and claims 21-40 are pending and being considered.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 08/28/2024 was filed on and/or after the mailing date of application No. 18/818,336, filed on 08/28/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant's cooperation is requested in correcting any errors of which applicant may become aware in the specification.

Claim Objections

Claims 24, 32 and 38 are objected to because of the following informalities: Claim 24 recites the limitation "replacing the one or more weights with a composite of the one or more weights with a composite of the one or more weights and the de-obfuscation function." The repeated phrase "with a composite of the one or more weights" is a typographical error; the limitation should be corrected to read "replacing the one or more weights with a composite of the one or more weights and the de-obfuscation function." Appropriate correction is required. Claims 32 and 38 are objected to for the same reasons, since they recite the same limitation as claim 24. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The claimed invention is not directed to patent eligible subject matter. Based upon consideration of all of the relevant factors with respect to the claims as a whole, claims 21-29 are determined to be directed to an abstract idea. Claims 21-29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under the 2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG"), effective January 7, 2019, claims 21-29 are directed to an abstract idea without reciting significantly more and without being integrated into a practical application. The claims are directed towards protection of neural networks by obfuscation of activation functions.

Regarding independent claim 21, the claim recites method steps comprising "replacing a first activation function of a first neural node of the neural network with a second activation function, obtained by modifying the first activation function using an obfuscation function; and modifying one or more weights of a second neural node of the neural network using a de-obfuscation function selected to compensate for a modification of an output of the first neural node caused by the obfuscation function," which, as drafted, are directed to an abstract idea without reciting significantly more and without being integrated into a practical application.
For instance, the claim limitation "replacing a first activation function of a first neural node of the neural network with a second activation function, obtained by modifying the first activation function using an obfuscation function," as drafted, falls under the mathematical grouping of abstract ideas, in that activation functions are mathematical functions used to introduce non-linearity. Changing one function to another using an obfuscation function is a mathematical operation: a process of solving mathematical functions and/or equations that could be carried out with additional physical steps, such as a human using pen and paper, but for the recitation of generic computer components. Therefore, replacing a first activation function of a first neural node with a second activation function is generally considered an abstract mathematical concept. If a claim limitation, under its broadest reasonable interpretation, covers a process of solving mathematical functions/equations that a human could perform with pen and paper but for the recitation of generic computer components, then it falls within the "Mathematical Concepts" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

The claim limitation "modifying one or more weights of a second neural node of the neural network using a de-obfuscation function selected to compensate for a modification of an output of the first neural node caused by the obfuscation function," as drafted, likewise falls under the mathematical grouping of abstract ideas, in that modifying neural network weights using a de-obfuscation function to compensate for output modifications (obfuscation) is generally considered a method of data manipulation or mathematical optimization, which is typically classified as an abstract idea.
Modifying neural network weights to achieve a desired output is essentially a mathematical concept. If a claim limitation, under its broadest reasonable interpretation, involves a process of mathematical optimization that a human could perform with pen and paper but for the recitation of generic computer components, then it falls within the "Mathematical Concepts" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

This judicial exception is not integrated into a practical application because the claim recites additional elements, e.g., a first neural node and a second neural node. These elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a "first neural node" and a "second neural node" within the claimed method steps "replacing a first activation function of a first neural node of the neural network with a second activation function, obtained by modifying the first activation function using an obfuscation function; and modifying one or more weights of a second neural node of the neural network using a de-obfuscation function selected to compensate for a modification of an output of the first neural node caused by the obfuscation function" amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Thus, the claim is not patent eligible.

Further, the elements recited in dependent claims 22-29, taken individually, do not amount to "significantly more" than the abstract idea identified above. Therefore, the claims do not amount to significantly more than the previously defined abstract idea. Among the indicia of "significantly more" are: (a) an improvement to another technology or field; (b) applying the judicial exception with, or by use of, a "particular machine"; (c) transforming a particular article/data into a different state or thing; (d) adding unconventional or non-routine steps producing a useful application; and (e) other meaningful limitations beyond generally linking the exception to a particular technological environment. As a result, claims 21-29 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter, as the claims do not contain any element or combination of elements sufficient to ensure that the patent in practice amounts to significantly more than a patent upon the ineligible concept itself. See Alice, 134 S. Ct. at 2360.
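For concreteness, the obfuscation-plus-weight-compensation scheme recited in claim 21 can be sketched numerically. This is a minimal illustration assuming an affine obfuscation function g(y) = a·y + b; the application's actual obfuscation and de-obfuscation functions are not disclosed in this excerpt, so the choice of g and the fold of its inverse into the second node's weight and bias are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda y: np.maximum(y, 0.0)

x = rng.normal(size=5)               # input vector
w1 = rng.normal(size=5)              # weights into the first neural node
w2, c2 = rng.normal(), rng.normal()  # weight and bias of the second node

# Original computation: the second node's pre-activation
h = relu(w1 @ x)                     # first activation function
orig = w2 * h + c2

# Obfuscation: replace relu with g∘relu for an affine g(y) = a*y + b
a, b = 2.7, -0.4                     # secret obfuscation parameters (a != 0, so g is invertible)
obf_act = lambda y: a * relu(y) + b  # the "second activation function"

# De-obfuscation folded into the second node's weight and bias,
# compensating for the modified output of the first node
w2_mod = w2 / a
c2_mod = c2 - (w2 / a) * b

obf = w2_mod * obf_act(w1 @ x) + c2_mod
assert np.isclose(orig, obf)         # the network's behavior is unchanged
```

The compensation works because w2_mod·(a·h + b) + c2_mod expands to w2·h + c2, while the stored activation function and weights no longer reveal the original ones.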
Under Alice, that is not sufficient "to transform an abstract idea into a patent-eligible invention."

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function. Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "the processing device to: replace a first activation function...; modify one or more weights...; replace the one or more weights...; replace a set of parameters...; modify the first neural node...; use an unmasking vector...; cause the neural network to process...", in claims 30-40.

The examiner finds that the specification, in Fig. 8 and associated paragraphs [0084-0085], provides sufficient structure for the "processing device" element to perform the functions recited in the above claim limitations of claims 30-40. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Therefore, these limitations as recited invoke 35 U.S.C. 112(f) or sixth paragraph. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 U.S.C. 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 21-23, 26, 29-31 and 36-37 are rejected under 35 U.S.C. 103 as being unpatentable over NI, Jian-jun (CN 102880880 A; hereinafter "NI") in view of BATCHELOR, A (CN 111465940 A; hereinafter "Batchelor"), and further in view of Wiener, Michael (US 20160048689 A1; hereinafter "Wiener").

Regarding claim 21, NI teaches a method to obfuscate operations of a neural network, the method comprising: modifying one or more weights of a second neural node of the neural network using a de-obfuscation function selected to compensate for a modification of an output of the first neural node caused by the obfuscation function (NI in paras. [0022 and 0055-0063] discloses a fuzzy neural network (FNN) with input processing using obfuscation. The FNN is made up of six layers: an input layer L1, in which the input vector is X = (X1, X2, ..., Xn), the number of input-layer nodes N1 is n, each node connects to a component Xi of the input vector, and the weighting is 1 (weights); an obfuscation layer L2; layers L3, L4 and L5, which form one common BP neural network in which the basic BP algorithm adjusts the network weight values until the error e is controlled to within the required range; and an output layer L6 for reverse obfuscation (e.g., de-obfuscation) processing, in which the FNN output obtained by using the obfuscation layer is processed).

However, NI fails to explicitly disclose, but Batchelor teaches, replacing a first activation function of a first neural node of the neural network with a second activation function (Batchelor in pdf page 4 (Contents of the Invention, paragraphs 10-15) discloses that each neuron in the neural network has an associated activation function, and that one or more of the activation functions may be a rectification function which can be replaced with a sigmoid function.
See also pdf page 10 (4th paragraph), which discloses that the activation function at each neuron in the neural network is replaced by a differentiable activation function. Batchelor, in pdf page 3 (9th paragraph), also discloses that random weights are assigned to each of the neurons in the neural network [...]. An output error can be reduced by adjusting the weight of each neuron in the neural network). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified NI by incorporating the above features, as taught by Batchelor. Such a modification would provide a fully connected neural network, in which each neuron in a given layer is connected to each of the neurons in the next layer, with increased flexibility based on the activation functions (Batchelor in pdf pages 2-3).

However, NI as modified by Batchelor fails to explicitly disclose, but Wiener teaches, the second activation function obtained by modifying the first activation function using an obfuscation function (Wiener in para. [0043]; FIG. 3 illustrates an encoded or obfuscated implementation 320 of the function X. This implementation 320 comprises an obfuscated function X′; in the implementation 320, the function X is obfuscated to form the function X′). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of NI as modified by Batchelor by incorporating the above features, as taught by Wiener. In such a modification the obfuscated function X′ does not expose the data to an attacker and does not expose the processing or operation of the function X to an attacker, and thus prevents an attacker from being able to access or deduce the secret or sensitive data (Wiener in paras. [0043 & 0097]).
Regarding claim 22, NI as modified by Batchelor in view of Wiener teaches the method of claim 21. NI as modified by Batchelor fails to explicitly disclose, but Wiener further teaches, the second activation function comprises a composite of the obfuscation function and the first activation function (Wiener in para. [0043]; FIG. 3 illustrates an encoded or obfuscated implementation 320 of the function X. This implementation 320 comprises an obfuscated function X′; in the implementation 320, the function X is obfuscated to form the function X′ by using an input encoding F and an output encoding G. The obfuscated function X′ can be considered as: X′ = G∘X∘F⁻¹). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of NI as modified by Batchelor by incorporating the above features, as taught by Wiener. In such a modification the obfuscated function X′ does not expose the data to an attacker and does not expose the processing or operation of the function X to an attacker, and thus prevents an attacker from being able to access or deduce the secret or sensitive data (Wiener in paras. [0043 & 0097]).

Regarding claim 23, NI as modified by Batchelor in view of Wiener teaches the method of claim 21. NI further teaches the de-obfuscation function comprises an inverse of the obfuscation function (NI in paras. [0055-0063] discloses that the FNN is made up of six layers, including [...] an output layer L6 for reverse obfuscation (e.g., de-obfuscation) processing in which the FNN output obtained by using the obfuscation layer is processed).
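Wiener's encoded implementation X′ = G∘X∘F⁻¹, relied on for the composite-function limitation, can be sketched as follows. This is a minimal illustration with toy invertible encodings F and G chosen here for demonstration; Wiener's actual encoding functions are not specified in the cited passage.

```python
import numpy as np

rng = np.random.default_rng(2)

# Secret function X to be protected (a toy nonlinearity here)
X = lambda d: np.sin(d) + d**2

# Input encoding F (invertible) and output encoding G, both assumed affine
F     = lambda d: 3.0 * d + 1.0
F_inv = lambda e: (e - 1.0) / 3.0
G     = lambda y: -0.5 * y + 4.0

# Obfuscated implementation: X' = G ∘ X ∘ F⁻¹
X_prime = lambda e: G(X(F_inv(e)))

d = rng.normal(size=8)
# X' consumes only the encoded input F(d) and emits only the
# encoded output G(X(d)); plain d and X(d) never appear.
assert np.allclose(X_prime(F(d)), G(X(d)))
```

The point of the construction is that an observer of X′ sees only encoded inputs and outputs, so neither the data d nor the behavior of X is directly exposed.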
Regarding claim 26, NI as modified by Batchelor in view of Wiener teaches the method of claim 21. NI fails to explicitly disclose, but Batchelor teaches, the first activation function comprises at least one of: [[a step function,]] a rectified linear activation function, a sigmoid function, or [[a softmax function]] (Batchelor in pdf page 4 (Contents of the Invention, paragraphs 10-15) discloses that each neuron in the neural network has an associated activation function, and that one or more of the activation functions may be a rectification function which can be replaced with a sigmoid function). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified NI by incorporating the above features, as taught by Batchelor. Such a modification would provide a fully connected neural network, in which each neuron in a given layer is connected to each of the neurons in the next layer, with increased flexibility based on the activation functions (Batchelor in pdf pages 2-3).

Regarding claim 29, NI as modified by Batchelor in view of Wiener teaches the method of claim 21. NI fails to explicitly disclose, but Batchelor teaches, further comprising: causing the neural network to process input data using [[the second activation function of the first neural node and]] the one or more modified weights of the second neural node (Batchelor in page 4 (12th paragraph) discloses that each connection between two neurons in the network may have an associated synaptic weight, and that each synaptic weight may have a value that has been adjusted so as to reduce the error between the determined output classification and the associated known classification of the object from the training data set, the training data set comprising image data and an associated known classification of the object).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified NI by incorporating the above features, as taught by Batchelor. Such a modification would reduce the output error, so that, by adjusting the weight of each neuron, the network learns to correctly classify the specific image/object (Batchelor in pdf page 3).

NI as modified by Batchelor teaches causing the neural network to process input data using the one or more modified weights of the second neural node. However, NI as modified by Batchelor fails to explicitly disclose, but Wiener further teaches, causing the neural network to process input data using the second activation function of the first neural node (Wiener in para. [0043]; FIG. 3 illustrates an encoded or obfuscated implementation 320 of the function X. This implementation 320 comprises an obfuscated function X′; in the implementation 320, the function X is obfuscated to form the function X′ by using an input encoding F and an output encoding G. The obfuscated function X′ receives or obtains an encoded representation F(d) of the input data d at, or via, an input 322 to the obfuscated function X′, processes the encoded representation F(d) to generate an encoded representation G(X(d)) of the processed data X(d), and provides the encoded representation G(X(d)) via an output 328. The encoded representation F(d) is the data d encoded using the function F. The encoded representation G(X(d)) is the data X(d) encoded using the function G. The obfuscated function X′ can be considered as: X′ = G∘X∘F⁻¹).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of NI as modified by Batchelor by incorporating the above features, as taught by Wiener. In such a modification the obfuscated function X′ does not expose the data to an attacker and does not expose the processing or operation of the function X to an attacker, and thus prevents an attacker from being able to access or deduce the secret or sensitive data (Wiener in paras. [0043 & 0097]).

Regarding independent claim 30, the claim has limitations similar to those treated in the above rejection for method claim 21, and they are met by the references as discussed above. Claim 30 however also recites the following limitations: "A system comprising: a memory device; and a processing device communicatively coupled to the memory device, the processing device to:", which are disclosed in paras. [0088-0091] of the cited prior art Wiener.

Regarding claim 31, NI as modified by Batchelor in view of Wiener teaches the system of claim 30. NI further teaches the de-obfuscation function comprises an inverse of the obfuscation function (NI in paras. [0055-0063] discloses that the FNN is made up of six layers, including [...] an output layer L6 for reverse obfuscation (e.g., de-obfuscation) processing in which the FNN output obtained by using the obfuscation layer is processed). However, NI as modified by Batchelor fails to explicitly disclose, but Wiener further teaches, wherein the second activation function comprises a composite of the obfuscation function and the first activation function (Wiener in para. [0043]; FIG. 3 illustrates an encoded or obfuscated implementation 320 of the function X. This implementation 320 comprises an obfuscated function X′; in the implementation 320, the function X is obfuscated to form the function X′ by using an input encoding F and an output encoding G.
The obfuscated function X′ can be considered as: X′ = G∘X∘F⁻¹). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of NI as modified by Batchelor by incorporating the above features, as taught by Wiener. In such a modification the obfuscated function X′ does not expose the data to an attacker and does not expose the processing or operation of the function X to an attacker, and thus prevents an attacker from being able to access or deduce the secret or sensitive data (Wiener in paras. [0043 & 0097]).

Regarding claim 36, NI as modified by Batchelor in view of Wiener teaches the system of claim 30. NI fails to explicitly disclose, but Batchelor teaches, further comprising: causing the neural network to process input data using [[the second activation function of the first neural node and]] the one or more modified weights of the second neural node (Batchelor in page 4 (12th paragraph) discloses that each connection between two neurons in the network may have an associated synaptic weight, and that each synaptic weight may have a value that has been adjusted so as to reduce the error between the determined output classification and the associated known classification of the object from the training data set, the training data set comprising image data and an associated known classification of the object). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified NI by incorporating the above features, as taught by Batchelor. Such a modification would reduce the output error, so that, by adjusting the weight of each neuron, the network learns to correctly classify the specific image/object (Batchelor in pdf page 3).
NI as modified by Batchelor teaches causing the neural network to process input data using the one or more modified weights of the second neural node. However, NI as modified by Batchelor fails to explicitly disclose, but Wiener further teaches, causing the neural network to process input data using the second activation function of the first neural node (Wiener in para. [0043]; FIG. 3 illustrates an encoded or obfuscated implementation 320 of the function X. This implementation 320 comprises an obfuscated function X′; in the implementation 320, the function X is obfuscated to form the function X′ by using an input encoding F and an output encoding G. The obfuscated function X′ receives or obtains an encoded representation F(d) of the input data d at, or via, an input 322 to the obfuscated function X′, processes the encoded representation F(d) to generate an encoded representation G(X(d)) of the processed data X(d), and provides the encoded representation G(X(d)) via an output 328. The encoded representation F(d) is the data d encoded using the function F. The encoded representation G(X(d)) is the data X(d) encoded using the function G. The obfuscated function X′ can be considered as: X′ = G∘X∘F⁻¹). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of NI as modified by Batchelor by incorporating the above features, as taught by Wiener. In such a modification the obfuscated function X′ does not expose the data to an attacker and does not expose the processing or operation of the function X to an attacker, and thus prevents an attacker from being able to access or deduce the secret or sensitive data (Wiener in paras. [0043 & 0097]).

Regarding independent claim 37, the claim has limitations similar to those treated in the above rejection for method claim 21, and they are met by the references as discussed above.
Claim 37 however also recites the following limitation: "A non-transitory computer-readable memory storing instructions that, when executed by a processing device, cause the processing device to perform operations comprising:", which is disclosed in the Abstract and/or claim 21 of the cited prior art Wiener.

Claims 24, 32 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over NI, Jian-jun (CN 102880880 A; hereinafter "NI") in view of BATCHELOR, A (CN 111465940 A; hereinafter "Batchelor"), further in view of Wiener, Michael (US 20160048689 A1; hereinafter "Wiener"), and further in view of FU, Jing-qi (CN 108614547 A; hereinafter "FU").

Regarding claim 24, NI as modified by Batchelor in view of Wiener teaches the method of claim 21. NI as modified by Batchelor in view of Wiener fails to explicitly disclose, but FU teaches, modifying the one or more weights of the second neural node comprises: replacing the one or more weights with a composite of the one or more weights and the de-obfuscation function (FU in pdf page 4 (7th-10th paragraphs) discloses the reverse obfuscation (e.g., de-obfuscation) process of the weights). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of NI as modified by Batchelor in view of Wiener by incorporating the above features, as taught by FU. Such a modification would obtain more accurate and reasonable evaluation results based on the reverse obfuscation and improve the accuracy of industrial control protocol security (FU in Abstract).

Allowable Subject Matter

Claims 25, 27-28, 33-35 and 39-40 are objected to as being dependent upon a rejected claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With regards to claims 25, 33 and 39, the claims recite subject matter “replacing a set of parameters of the first neural node that determine an input into the first activation function with a set of expanded parameters modified using an application of a masking matrix, wherein the set of expanded parameters comprises: the set of parameters, and a set of dummy parameters; and wherein the second activation function is further obtained by modifying the first activation function using an unmasking vector selected to compensate for: the application of the masking matrix, and presence of the set of dummy parameters.” which is not disclosed in the closest cited prior art of record (as discussed above and listed in form PTO-892); none of the closest cited prior art, taken alone or in combination, teaches, suggests, anticipates, or renders obvious all of the claimed features presented in the claims. For this reason, the dependent claims would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided the claim objections and rejections set forth in this Office action are also overcome.
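The quoted subject matter — expanded parameters, a masking matrix, and an unmasking vector that compensates for both the masking and the dummy parameters — can be sketched in a linear-algebra toy model. This is an illustrative assumption about how such a scheme could work, not a construction from the application: the true weight row w is hidden among random dummy rows, the whole block is mixed by a masking matrix M, and the unmasking vector u simultaneously inverts M and zeroes out every dummy row.

```python
import numpy as np

# Hypothetical sketch: recover the true pre-activation w·x from masked,
# dummy-padded parameters using a single unmasking vector.
rng = np.random.default_rng(1)
n, m = 4, 3                       # input width, expanded parameter rows
x = rng.standard_normal(n)        # node input
w = rng.standard_normal(n)        # true parameters of the first node

# Expanded parameters: row 0 carries w; rows 1..m-1 are dummy parameters.
E = np.vstack([w, rng.standard_normal((m - 1, n))])

M = rng.standard_normal((m, m))   # masking matrix (invertible w.h.p.)
P = M @ E                         # what is actually stored: masked params

# Unmasking vector u with uᵀM = e₀ᵀ: undoes the masking AND eliminates
# the dummy rows, leaving exactly row 0 of E.
u = np.linalg.solve(M.T, np.eye(m)[0])

assert np.allclose(u @ (P @ x), w @ x)
```

Because u @ P equals w exactly, folding u into the (second) activation function — as the claim language suggests — would compensate for both the masking matrix and the dummy parameters in one step.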
With regards to claims 27, 34 and 40, the claims recite subject matter “modifying the first neural node with one or more dummy activation functions, wherein an individual dummy activation function of the one or more dummy activation functions is obtained by modifying the first activation function using at least one of: the obfuscation function, or an additional obfuscation function; and wherein modifying the one or more weights of the second neural node comprises: using an unmasking vector selected to eliminate one or more outputs of the one or more dummy activation functions.” which is not disclosed in the closest cited prior art of record (as discussed above and listed in form PTO-892); none of the closest cited prior art, taken alone or in combination, teaches, suggests, anticipates, or renders obvious all of the claimed features presented in the claims. For this reason, the dependent claims would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided the claim objections and rejections set forth in this Office action are also overcome. With regards to claims 28 and 35, the claims are allowable by virtue of their dependency on the allowable claims 27 and 34, respectively, as mentioned above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALI CHEEMA, whose contact number is 571-272-1239 and email is ali.cheema@uspto.gov. The examiner can normally be reached Monday-Friday, 8:00AM – 4:00PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A. Shiferaw, can be reached at 571-272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALI H. CHEEMA/ Primary Examiner, Art Unit 2497

Prosecution Timeline

Aug 28, 2024
Application Filed
Jul 17, 2024
Response after Non-Final Action
Oct 07, 2024
Response after Non-Final Action
Oct 10, 2024
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602499
DATA ACCESS UNDER REGULATORY CONSTRAINTS
2y 5m to grant Granted Apr 14, 2026
Patent 12602490
AUTOMATED BACK-PROPAGATION OF A FIX USING A REPRODUCIBLE BUILD, TEST, AND VALIDATION PROCESS TO CREATE A PATCHED ARTIFACT
2y 5m to grant Granted Apr 14, 2026
Patent 12591712
DATA CONFIDENCE GRAPHS
2y 5m to grant Granted Mar 31, 2026
Patent 12587843
A USER EQUIPMENT, AND METHOD FOR RECEIVING AN EXTENSIBLE AUTHENTICATION PROTOCOL (EAP) IDENTITY REQUEST
2y 5m to grant Granted Mar 24, 2026
Patent 12586480
ELECTRONIC DOCUMENT PRESENTATION MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+51.7%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 204 resolved cases by this examiner. Grant probability derived from career allow rate.
