Prosecution Insights
Last updated: April 19, 2026
Application No. 18/178,684

ACTIVATION FUNCTION FOR HOMOMORPHICALLY-ENCRYPTED NEURAL NETWORKS

Non-Final OA (§101, §102)
Filed: Mar 06, 2023
Examiner: GRUSZKA, DANIEL PATRICK
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Intel Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift in resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 32 across all art units (32 currently pending)

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§101, §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-20 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-15 describe a machine and claims 16-20 describe a process.

With respect to claim 1:

Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG. analyze the activation statistics to identify one or more top activations in the activation statistics, the activation statistics corresponding to activations by activation functions of the neural network; (This is an abstract idea of a "Mental Process." The "identify" step, under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The identification could be done manually by an individual.)

Step 2A Prong 2: The judicial exception is not integrated into a practical application. Additional elements: collect activation statistics for a neural network of a trained model; (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). save the one or more top activations as saved activations; and (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). output the saved activations and activation parameters as input for an activation function of the trained model.
(this limitation amounts to adding insignificant extra-solution activity to the judicial exception). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception The additional elements add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is directed to a well understood routine conventional activity of data transmission (MPEP 2106.05(d)(II)(iv)). Therefore, claim 1 is ineligible. With respect to claim 2: Step 2A Prong 1: claim 2, which incorporates the rejection of claim 1, does not recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. the activation statistics comprise a percentage of activations of a neuron of the neural network by a previous activation function of the neuron, and wherein the activation statistics are collected for each output feature map of each layer of the neural network. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is directed to a well understood routine conventional activity of data transmission (MPEP 2106.05(d)(II)(iv)). Therefore, claim 2 is ineligible. With respect to claim 3: Step 2A Prong 1: claim 3, which incorporates the rejection of claim 2, does not recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. 
the activation statistics are collected in matrices corresponding to output feature maps of the neural network, where the matrices correspond to index locations in the output feature maps. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is directed to a well understood routine conventional activity of data transmission (MPEP 2106.05(d)(II)(iv)). Therefore, claim 3 is ineligible. With respect to claim 4: Step 2A Prong 1: claim 4, which incorporates the rejection of claim 3, recites an additional abstract idea: analyzing the activation statistics comprises, for each matrix of the matrices, identifying a first percent of the activation statistics having a highest number of activations in the matrix. (This is an abstract idea of a "Mental Process." The "identifying" step under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The identification could be made manually by an individual.) Step 2A Prong 2: claim 4 does not recite any additional elements and thus cannot be integrated into a practical application. Step 2B: claim 4 does not recite an additional element. Therefore, claim 4 is ineligible. With respect to claim 5: Step 2A Prong 1: claim 5, which incorporates the rejection of claim 4, does not recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. the one or more processors are to save index locations corresponding to the first percent of the activation statistics as the one or more top activations. 
(This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element is recited at a generic level and represents a generic computer component used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)). Therefore, claim 5 is ineligible.

With respect to claim 6:

Step 2A Prong 1: claim 6, which incorporates the rejection of claim 5, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application. the first percent of the activation statistics that are fixed activations by the activation functions during an inference phase that deploys the trained model; and (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). a second percent of the activation statistics to randomly initialize by the activation functions during the inference phase that deploys the trained model. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 6 is ineligible.

With respect to claim 7:

Step 2A Prong 1: claim 7, which incorporates the rejection of claim 6, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application.
the first percent and the second percent are automatically tuned during a training phase that trains the trained model. (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element is recited at a generic level and represents a generic computer component used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)). Therefore, claim 7 is ineligible.

With respect to claim 8:

Step 2A Prong 1: claim 8, which incorporates the rejection of claim 6, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application. the saved activations are output using a matrix data structure. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 8 is ineligible.

With respect to claim 9:

Step 2A Prong 1: claim 9, which incorporates the rejection of claim 3, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application. the activation parameters differ for one or more of the output feature maps of the neural network. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).
Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is directed to a well understood routine conventional activity of data transmission (MPEP 2106.05(d)(II)(iv)). Therefore, claim 9 is ineligible. With respect to claim 10: Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG. applying a filter to the input data at the neuron, the filter to generate filter output; (This is an abstract idea of a "Mental Process." The "filter" step under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind. The filtering could be made manually by an individual.) Step 2A Prong 2: The judicial exception is not integrated into a practical application. Additional elements: receiving input data at a neuron of a neural network of a trained model, the input data received during an inference phase deploying the trained model; (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). applying an activation function to the filter output, the activation function comprising a matrix to apply to an output feature map of the neuron, the matrix generated based on saved fixed activated index locations of the output feature map and based on saved activation parameters identifying a percent of randomly-activated index locations of the output feature map; and (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.) output the output feature map from the neuron based on application of the activation function to the filter output. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). 
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements “receiving…” and “output…” add insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)(iv)). The additional element “applying an…” is recited at a generic level and represents a generic computer component used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)). Therefore, claim 10 is ineligible.

With respect to claim 11:

Step 2A Prong 1: claim 11, which incorporates the rejection of claim 10, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application. the matrix is generated from activation statistics that are collected from applying training data to the trained model prior to deployment of the trained model, wherein the activation statistics comprise a percentage of activations of a neuron of the neural network by a previous activation function of the neuron, and wherein the activation statistics are collected for each output feature map of each layer of the neural network. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 11 is ineligible.
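For orientation, the statistics-collection limitation recurring through claims 1-3 and 10-12 (accumulating, per index location of each output feature map, how often an activation fired across training samples) can be sketched in a few lines of Python. This is an editorial illustration, not code from the application or from Pogorelik; the firing rule (value > 0, ReLU-style) and all names are assumptions.

```python
def collect_activation_stats(feature_maps):
    """Count, per index location, how many sample feature maps
    "fired" (produced a positive value) at that location.
    Illustrative only; the actual counting rule is not specified here."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    counts = [[0] * w for _ in range(h)]
    for fmap in feature_maps:
        for i in range(h):
            for j in range(w):
                if fmap[i][j] > 0:  # assumed ReLU-style firing rule
                    counts[i][j] += 1
    return counts

# Two hypothetical 2x2 output feature maps from two training samples.
stats = collect_activation_stats([[[1, 0], [0, 2]],
                                  [[3, 0], [1, 0]]])
# → [[2, 0], [1, 1]]
```

In a real network one such count matrix would be kept per output feature map of each layer, matching the per-layer activation matrices the Office Action quotes from Pogorelik.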
With respect to claim 12: Step 2A Prong 1: claim 12, which incorporates the rejection of claim 11, does not recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. the activation statistics are collected in matrices corresponding to output feature maps of the neural network, where the matrices correspond to index locations in the output feature maps. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is directed to a well understood routine conventional activity of data transmission (MPEP 2106.05(d)(II)(iv)). Therefore, claim 12 is ineligible. With respect to claim 13: Step 2A Prong 1: claim 13, which incorporates the rejection of claim 12, does not recite an abstract idea. Step 2A Prong 2: The judicial exception is not integrated into a practical application. the saved fixed activated index locations comprise, for each matrix of the matrices, a first percent of the activation statistics having a highest number of activations in the matrix. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception). Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is directed to a well understood routine conventional activity of data transmission (MPEP 2106.05(d)(II)(iv)). Therefore, claim 13 is ineligible. 
With respect to claim 14:

Step 2A Prong 1: claim 14, which incorporates the rejection of claim 13, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application. the saved activation parameters are automatically tuned during a training phase that trains the trained model. (This amounts to no more than mere instructions to “apply” the exception using a generic computer component.)

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element is recited at a generic level and represents a generic computer component used to apply the abstract idea. Mere instructions to apply an exception cannot provide an inventive concept (MPEP 2106.05(f)). Therefore, claim 14 is ineligible.

With respect to claim 15:

Step 2A Prong 1: claim 15, which incorporates the rejection of claim 12, does not recite an abstract idea.

Step 2A Prong 2: The judicial exception is not integrated into a practical application. the saved activation parameters differ for one or more of the output feature maps of the neural network. (this limitation amounts to adding insignificant extra-solution activity to the judicial exception).

Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional element adds insignificant extra-solution activity to the judicial exception and cannot provide an inventive concept. Storing and retrieving information in memory is a well-understood, routine, and conventional activity (MPEP 2106.05(d)(II)(iv)). Therefore, claim 15 is ineligible.

With respect to claim 16: The claim recites limitations similar to those of claim 1. Therefore, the same subject matter analysis that was utilized for claim 1, as described above, is equally applicable to claim 16. Therefore, claim 16 is ineligible.
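Claims 4-5 and 13 turn on saving the index locations of a "first percent" of the activation statistics with the highest activation counts. A minimal Python sketch of that selection follows; the function name, the flattening order, and the rounding rule are illustrative assumptions, not taken from the application.

```python
def top_activation_indices(counts, first_percent):
    """Return the index locations holding the top `first_percent`
    of entries in a count matrix, ranked by activation count.
    These are the locations that would be saved as fixed activations."""
    flat = sorted(
        ((counts[i][j], (i, j))
         for i in range(len(counts))
         for j in range(len(counts[0]))),
        key=lambda t: t[0],
        reverse=True)                      # highest counts first
    k = max(1, round(first_percent * len(flat)))  # assumed rounding rule
    return [loc for _, loc in flat[:k]]

# Keep the top 50% of a hypothetical 2x2 count matrix.
saved = top_activation_indices([[5, 1], [3, 2]], 0.5)
# → [(0, 0), (1, 0)]
```

Only the index locations are saved, not the data-dependent values, which is what lets the later activation step run without inspecting the input.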
With respect to claim 17: The claim recites limitations similar to those of claim 2. Therefore, the same subject matter analysis that was utilized for claim 2, as described above, is equally applicable to claim 17. Therefore, claim 17 is ineligible.

With respect to claim 18: The claim recites limitations similar to those of claims 3 and 4. Therefore, the same subject matter analysis that was utilized for claims 3 and 4, as described above, is equally applicable to claim 18. Therefore, claim 18 is ineligible.

With respect to claim 19: The claim recites limitations similar to those of claim 6. Therefore, the same subject matter analysis that was utilized for claim 6, as described above, is equally applicable to claim 19. Therefore, claim 19 is ineligible.

With respect to claim 20: The claim recites limitations similar to those of claim 9. Therefore, the same subject matter analysis that was utilized for claim 9, as described above, is equally applicable to claim 20. Therefore, claim 20 is ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pogorelik (US 20210319098 A1).

Regarding claim 1, Pogorelik teaches: An apparatus comprising: one or more processors to: ([0430] “A hardened system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.”) collect activation statistics for a neural network of a trained model; ([0400] “As will be described in more detail below, circuitry 6210 can execute instructions 6228 to perform one or more of gathering activation statistics 6228, analyzing the activation statistics 6228, and processing at inference time. For example, first, circuitry 6210 can execute instructions 6228 to take inference model 6222 (e.g., a trained deep neural network (DNN) model) and the input data 6224 (e.g., a training dataset with samples) and initializing an empty activation matrix 6229 that corresponds with each output feature map 6225 for all the layers.”) analyze the activation statistics to identify one or more top activations in the activation statistics, the activation statistics corresponding to activations by activation functions of the neural network; ([0401] “In some examples, the statistics may be analyzed based on one or more parameters 6227. Sometimes, the one or more parameters 6227 may be provided by a user. Oftentimes, the one or more parameters 6227 may include first, second, and third parameters. The first parameter may include a percent of which of the fired activations are to be fixed. The second parameter may include the total percent of neurons that should be fired.
The third parameter may include the percentile of the total top activations to take the fixed first parameter indices from.”) save the one or more top activations as saved activations; and ([0401] “Circuitry 6210 may execute instructions 6228 to scan each activation map 6231 and the indices of the indices of random first parameter percent chosen from the top activated neurons (e.g., the most-activated feature maps' values) are saved.”) output the saved activations and activation parameters as input for an activation function of the trained model. ([0402] “The actual activation operation at inference time may be composed of passing all neurons that are in the corresponding saved locations map (e.g., activation maps 6231) for the processed layer.”) Regarding claim 2, Pogorelik teaches claim 1 as outlined above. Pogorelik further teaches: the activation statistics comprise a percentage of activations of a neuron of the neural network by a previous activation function of the neuron, and wherein the activation statistics are collected for each output feature map of each layer of the neural network. ([0400] “Typically, each training sample in the input data 6224 may be forward-propagated while all the activations for all the layers are accumulated in the corresponding activation matrix 6229 to produce activation maps 6231. In various examples, activation statistics 6223 may include the completed activation matrices 6229 and/or activation maps.”) Regarding claim 3, Pogorelik teaches claim 2 as outlined above. Pogorelik further teaches: the activation statistics are collected in matrices corresponding to output feature maps of the neural network, where the matrices correspond to index locations in the output feature maps. ([0405] “accumulate activations of samples in a dataset into the one or more matrices to produce activation statistics” activations of samples in a dataset may be accumulated into corresponding matrices to produce activation statistics. 
For instance, each activation of each sample may be accumulated into activation matrices 6229. In some examples, each sample may be forward-propagated while all of the activation maps for all the layers are saved.”)

Regarding claim 4, Pogorelik teaches claim 3 as outlined above. Pogorelik further teaches: analyzing the activation statistics comprises, for each matrix of the matrices, identifying a first percent of the activation statistics having a highest number of activations in the matrix. ([0406] “the third parameter includes a percentile of total top activations to take fixed activations from”)

Regarding claim 5, Pogorelik teaches claim 4 as outlined above. Pogorelik further teaches: the one or more processors are to save index locations corresponding to the first percent of the activation statistics as the one or more top activations. ([0409] “Proceeding to block 6404, “configuration: fixed locations & percent of output neurons to fire” one or more configuration parameters including fixed locations and percent of output neurons to fire may be determined, implemented, and/or configured. For instance, instructions 6228 may determine/configure fixed locations and percent of output neurons based on one or more of input data 6224, inference model 6222, activation statistics 6223, output feature maps 6225, parameters 6227, output data 6226, and activation maps 6231.”)

Regarding claim 6, Pogorelik teaches claim 5 as outlined above. Pogorelik further teaches: the first percent of the activation statistics that are fixed activations by the activation functions during an inference phase that deploys the trained model; and a second percent of the activation statistics to randomly initialize by the activation functions during the inference phase that deploys the trained model. ([0410] “At block 6406 “fire the neurons at the fixed locations plus a random percent of neurons” the neurons at the fixed locations and random percent of neurons, or a percent of random neurons.
For example, the random percent of neurons may be determined based on one or more of parameter 6227. In some examples, one or more activation functions described herein may be used for neural networks encrypted with multi-party computation (e.g., MPC schemes and/or homomorphic encryption, or the like).”) Regarding claim 7, Pogorelik teaches claim 6 as outlined above. Pogorelik further teaches: the first percent and the second percent are automatically tuned during a training phase that trains the trained model. ([0402] “Further, negligible overhead may be needed for model storage, while accuracy is preserved, and it may run orders of magnitude faster than approximation-based activations. Also, this method may be applied post-hoc to already trained models and doesn't require fine tuning.”) Regarding claim 8, Pogorelik teaches claim 6 as outlined above. Pogorelik further teaches: the saved activations are output using a matrix data structure. ([0404] “At block 6302 “initialize one or more matrices corresponding to an output feature map for each layer in an inference model” one or more matrices corresponding to an output feature map for each layer in an inference model may be initialized. For example, one or more activation matrices 6229 that correspond to one or more output feature maps 6225 may be initialized.”) Regarding claim 9, Pogorelik teaches claim 3 as outlined above. Pogorelik further teaches: the activation parameters differ for one or more of the output feature maps of the neural network. ([0409] “Logic flow 6400 may begin at block 6402. At block 6402 “load saved fixed activations” fixed activations that are saved may be loaded. For example, fixed activations in one or more of activations matrices 6229 and activations maps 6231 may be loaded. 
Proceeding to block 6404, “configuration: fixed locations & percent of output neurons to fire” one or more configuration parameters including fixed locations and percent of output neurons to fire may be determined, implemented, and/or configured. For instance, instructions 6228 may determine/configure fixed locations and percent of output neurons based on one or more of input data 6224, inference model 6222, activation statistics 6223, output feature maps 6225, parameters 6227, output data 6226, and activation maps 6231.”) Regarding claim 10, Pogorelik teaches: A non-transitory computer-readable storage medium having stored thereon executable computer program instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: ([0427] “In some examples, the non-transitory storage medium may include one or more types of computer-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth”). receiving input data at a neuron of a neural network of a trained model, the input data received during an inference phase deploying the trained model; ([0400] “For example, first, circuitry 6210 can execute instructions 6228 to take inference model 6222 (e.g., a trained deep neural network (DNN) model) and the input data 6224 (e.g., a training dataset with samples)”). applying a filter to the input data at the neuron, the filter to generate filter output; ([0189] “For example, circuitry 1810-5 can execute instructions 1828-5 decode, filter, and encode raw input data 1824 to generate sanitized input data 1825.”). 
applying an activation function to the filter output, the activation function comprising a matrix to apply to an output feature map of the neuron, the matrix generated based on saved fixed activated index locations of the output feature map and based on saved activation parameters identifying a percent of randomly-activated index locations of the output feature map; and ([0407] “For instance, a dedicated activation function may be implemented based on parameters 6227 and the activation statistics 6223. In such instances, the activation statistics 6223 may include activation matrices 6229 and/or activation maps 6231.” And [0401] “Circuitry 6210 may execute instructions 6228 to scan each activation map 6231 and the indices of the indices of random first parameter percent chosen from the top activated neurons (e.g., the most-activated feature maps' values) are saved.”) output the output feature map from the neuron based on application of the activation function to the filter output. ([0404] “one or more matrices corresponding to an output feature map for each layer in an inference model may be initialized. For example, one or more activation matrices 6229 that correspond to one or more output feature maps 6225 may be initialized.”) Regarding claim 11, Pogorelik teaches claim 10 as outlined above. Pogorelik further teaches: the matrix is generated from activation statistics that are collected from applying training data to the trained model prior to deployment of the trained model, wherein the activation statistics comprise a percentage of activations of a neuron of the neural network by a previous activation function of the neuron, and wherein the activation statistics are collected for each output feature map of each layer of the neural network. 
([0400] “Typically, each training sample in the input data 6224 may be forward-propagated while all the activations for all the layers are accumulated in the corresponding activation matrix 6229 to produce activation maps 6231. In various examples, activation statistics 6223 may include the completed activation matrices 6229 and/or activation maps.”) Regarding claim 12, Pogorelik teaches claim 11 as outlined above. Pogorelik further teaches: the activation statistics are collected in matrices corresponding to output feature maps of the neural network, where the matrices correspond to index locations in the output feature maps. ([0409] “For instance, instructions 6228 may determine/configure fixed locations and percent of output neurons based on one or more of input data 6224, inference model 6222, activation statistics 6223, output feature maps 6225, parameters 6227, output data 6226, and activation maps 6231.”) Regarding claim 13, Pogorelik teaches claim 12 as outlined above. Pogorelik further teaches: the saved fixed activated index locations comprise, for each matrix of the matrices, a first percent of the activation statistics having a highest number of activations in the matrix. ([0406] “the third parameter includes a percentile of total top activations to take fixed activations from”) Regarding claim 14, Pogorelik teaches claim 13 as outlined above. Pogorelik further teaches: the saved activation parameters are automatically tuned during a training phase that trains the trained model. ([0402] “Further, negligible overhead may be needed for model storage, while accuracy is preserved, and it may run orders of magnitude faster than approximation-based activations. Also, this method may be applied post-hoc to already trained models and doesn't require fine tuning.”) Regarding claim 15, Pogorelik teaches claim 12 as outlined above. Pogorelik further teaches: the saved activation parameters differ for one or more of the output feature maps of the neural network. 
([0409] “Logic flow 6400 may begin at block 6402. At block 6402 “load saved fixed activations” fixed activations that are saved may be loaded. For example, fixed activations in one or more of activations matrices 6229 and activations maps 6231 may be loaded. Proceeding to block 6404, “configuration: fixed locations & percent of output neurons to fire” one or more configuration parameters including fixed locations and percent of output neurons to fire may be determined, implemented, and/or configured. For instance, instructions 6228 may determine/configure fixed locations and percent of output neurons based on one or more of input data 6224, inference model 6222, activation statistics 6223, output feature maps 6225, parameters 6227, output data 6226, and activation maps 6231.”). Regarding claim 16, Pogorelik teaches: A method comprising: ([0402] “This method can enable efficient activations, yet no computation is done during inference time on the data itself (hence data agnostic). Further, negligible overhead may be needed for model storage, while accuracy is preserved, and it may run orders of magnitude faster than approximation-based activations. Also, this method may be applied post-hoc to already trained models and doesn't require fine tuning.”) collecting activation statistics for a neural network of a trained model; ([0400] “As will be described in more detail below, circuitry 6210 can execute instructions 6228 to perform one or more of gathering activation statistics 6228, analyzing the activation statistics 6228, and processing at inference time.
For example, first, circuitry 6210 can execute instructions 6228 to take inference model 6222 (e.g., a trained deep neural network (DNN) model) and the input data 6224 (e.g., a training dataset with samples) and initializing an empty activation matrix 6229 that corresponds with each output feature map 6225 for all the layers.”) analyzing the activation statistics to identify one or more top activations in the activation statistics, the activation statistics corresponding to activations by activation functions of the neural network; ([0401] “In some examples, the statistics may be analyzed based on one or more parameters 6227. Sometimes, the one or more parameters 6227 may be provided by a user. Oftentimes, the one or more parameters 6227 may include first, second, and third parameters. The first parameter may include a percent of which of the fired activations are to be fixed. The second parameter may include the total percent of neurons that should be fired. The third parameter may include the percentile of the total top activations to take the fixed first parameter indices from.”) saving the one or more top activations as saved activations; and ([0401] “Circuitry 6210 may execute instructions 6228 to scan each activation map 6231 and the indices of the indices of random first parameter percent chosen from the top activated neurons (e.g., the most-activated feature maps' values) are saved.”) outputting the saved activations and activation parameters as input for an activation function of the trained model. ([0402] “The actual activation operation at inference time may be composed of passing all neurons that are in the corresponding saved locations map (e.g., activation maps 6231) for the processed layer.”) Regarding claim 17, Pogorelik teaches claim 16 as outlined above. 
Pogorelik further teaches: the activation statistics comprise a percentage of activations of a neuron of the neural network by a previous activation function of the neuron, and wherein the activation statistics are collected for each output feature map of each layer of the neural network. ([0400] “Typically, each training sample in the input data 6224 may be forward-propagated while all the activations for all the layers are accumulated in the corresponding activation matrix 6229 to produce activation maps 6231. In various examples, activation statistics 6223 may include the completed activation matrices 6229 and/or activation maps.”) Regarding claim 18, Pogorelik teaches claim 17 as outlined above. Pogorelik further teaches: the activation statistics are collected in matrices corresponding to output feature maps of the neural network, where the matrices correspond to index locations in the output feature maps, and wherein analyzing the activation statistics comprises, for each matrix of the matrices, identifying a first percent of the activation statistics having a highest number of activations in the matrix. ([0405] “accumulate activations of samples in a dataset into the one or more matrices to produce activation statistics” activations of samples in a dataset may be accumulated into corresponding matrices to produce activation statistics. For instance, each activation of each sample may be accumulated into activation matrices 6229. In some examples, each sample may be forward-propagated while all of the activation maps for all the layers are saved.” and [0406] “the third parameter includes a percentile of total top activations to take fixed activations from”) Regarding claim 19, Pogorelik teaches claim 18 as outlined above. 
Pogorelik further teaches: the first percent of the activation statistics that are fixed activations by the activation functions during an inference phase that deploys the trained model; and a second percent of the activation statistics to randomly initialize by the activation functions during the inference phase that deploys the trained model. ([0410] “At block 6406 “fire the neurons at the fixed locations plus a random percent of neourons” the neurons at the fixed locations and random percent of neurons, or a percent of random neurons. For example, the random percent of neurons may be determined based on one or more of parameter 6227. In some examples, one or more activation functions described herein may be used for neural networks encrypted with multi-party computation (e.g., MPC schemes and/or homomorphic encryption, or the like).”) Regarding claim 20, Pogorelik teaches claim 17 as outlined above. Pogorelik further teaches: the activation parameters differ for one or more of the output feature maps of the neural network. ([0409] “Logic flow 6400 may begin at block 6402. At block 6402 “load saved fixed activations” fixed activations that are saved may be loaded. For example, fixed activations in one or more of activations matrices 6229 and activations maps 6231 may be loaded. Proceeding to block 6404, “configuration: fixed locations & percent of output neurons to fire” one or more configuration parameters including fixed locations and percent of output neurons to fire may be determined, implemented, and/or configured. 
For instance, instructions 6228 may determine/configure fixed locations and percent of output neurons based on one or more of input data 6224, inference model 6222, activation statistics 6223, output feature maps 6225, parameters 6227, output data 6226, and activation maps 6231.”) Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL PATRICK GRUSZKA whose telephone number is (571)272-5259. The examiner can normally be reached M-F 9:00 AM - 6:00 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DANIEL GRUSZKA/Examiner, Art Unit 2121 /Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121
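For readers unfamiliar with the technique at issue, the paragraphs quoted above ([0400]-[0410]) describe a data-agnostic activation: firing counts are accumulated per index location of each output feature map during a statistics pass over training samples, a fixed set of indices chosen from the top-activated locations is saved, and at inference time the neurons at those fixed locations plus a random percent of the remaining neurons are fired. The sketch below illustrates that flow under stated assumptions; all function names, variable names, and parameter values are hypothetical illustrations, not taken from the application or the Pogorelik reference.

```python
import numpy as np

def collect_activation_stats(activation_batches):
    """Accumulate firing counts per index location across all samples
    (cf. [0400]: activations accumulated into an activation matrix
    per output feature map)."""
    stats = None
    for act in activation_batches:  # act: one sample's feature map
        fired = (act > 0).astype(np.int64)
        stats = fired if stats is None else stats + fired
    return stats

def select_fixed_indices(stats, top_percentile, fixed_percent, rng):
    """Choose a random fixed_percent of indices from the top_percentile
    most-activated locations (cf. the first and third parameters in
    [0401]/[0406])."""
    flat = stats.ravel()
    n_top = max(1, int(flat.size * top_percentile))
    top_idx = np.argsort(flat)[::-1][:n_top]  # most-activated first
    n_fixed = max(1, int(flat.size * fixed_percent))
    return rng.choice(top_idx, size=min(n_fixed, top_idx.size), replace=False)

def activation_mask(shape, fixed_idx, random_percent, rng):
    """Binary mask firing the saved fixed locations plus a random
    percent of the remaining neurons (cf. block 6406 in [0410])."""
    mask = np.zeros(int(np.prod(shape)), dtype=np.float32)
    mask[fixed_idx] = 1.0
    remaining = np.setdiff1d(np.arange(mask.size), fixed_idx)
    n_rand = int(mask.size * random_percent)
    if n_rand > 0 and remaining.size > 0:
        extra = rng.choice(remaining, size=min(n_rand, remaining.size),
                           replace=False)
        mask[extra] = 1.0
    return mask.reshape(shape)

# Statistics pass over a toy "training set" of 10 random 4x4 feature maps.
rng = np.random.default_rng(0)
batches = [rng.standard_normal((4, 4)) for _ in range(10)]
stats = collect_activation_stats(batches)
fixed = select_fixed_indices(stats, top_percentile=0.5,
                             fixed_percent=0.25, rng=rng)
# Inference-time "activation" is an elementwise multiply by the mask;
# no comparison is computed on the data itself.
mask = activation_mask((4, 4), fixed, random_percent=0.1, rng=rng)
out = batches[0] * mask
```

Because the mask depends only on the saved indices and a random draw, and is applied by multiplication rather than a data-dependent comparison, inference avoids the nonlinear comparisons that are expensive under homomorphic encryption or MPC, which is the connection the quoted [0410] draws to encrypted neural networks.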

Prosecution Timeline

Mar 06, 2023: Application Filed
Nov 13, 2025: Non-Final Rejection, §101, §102 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
