Prosecution Insights
Last updated: April 19, 2026
Application No. 19/301,717

Logic Gate Networks Generated Using Differentiable Logic Gate Models

Final Rejection: §101, §103
Filed: Aug 15, 2025
Examiner: VAUGHN, RYAN C
Art Unit: 2125
Tech Center: 2100 (Computer Architecture & Software)
Assignee: UNIVERSITÄT KONSTANZ
OA Round: 2 (Final)

Grant Probability: 62% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 9m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allow Rate: 62% (145 granted / 235 resolved cases; +6.7% vs TC average)
Interview Lift: +19.4% higher allowance rate on resolved cases with an examiner interview
Avg Prosecution: 3y 9m (45 applications currently pending)
Career History: 280 total applications across all art units

Statute-Specific Performance

§101: 23.9% (-16.1% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Based on career data from 235 resolved cases; Tech Center averages are estimates.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-30 are presented for examination.

Response to Amendment

Applicant’s amendment appears to have overcome the specification objections and the rejections under 35 USC § 112. Therefore, those objections and rejections are withdrawn.

Claim Objections

Claim 26 is objected to because of the following informalities: “data is” should be “data are”. Claims 27-30 are objected to for dependency on claim 26. Appropriate correction is required.

Claim Rejections - 35 USC § 101

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Claim 1

Step 1: The claim recites a method; therefore, it is directed to the statutory category of processes.

Step 2A Prong 1: The claim recites, inter alia:

[I]teratively training the logic gate network via a plurality of training iterations, each training iteration including: forward-propagating a batch of the input vectors through the logic gate network to generate a training network output by, for each node, computing a differentiable output that is a function of the outputs of the potential logic gate operators of the respective node according to current differentiable parameters thereof:

The training limitation here recites specific mathematical operations such as “computing a differentiable output that is a function of the outputs of the potential logic gate operators of the respective node”. Thus, this limitation is directed to a mathematical concept. Compare July 2024 § 101 Examples, Example 47, claim 2.

[C]omputing a loss value that quantifies a difference between the training network output and the corresponding target output values:

Computing a loss value by computing a difference between actual and target output values is a mathematical concept.

[D]etermining, via a training optimization algorithm, updated differentiable parameters for at least one node:

Determining updated parameters via optimization is a mathematical concept.

[S]electing, after completion of the plurality of training iterations, for each of at least some of the plurality of nodes, a single logic gate operator from the predefined finite set of potential logic gate operators based on the differentiable parameters of the respective node:

This limitation could encompass mentally selecting a logic gate operator for the node by observing the node’s parameters.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites:

[R]eceiving, at a computing system, a training data set of input vectors and corresponding target output values:

This limitation is directed to the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

[I]nstantiating, in a memory of the computing system, an untrained logic gate network with a plurality of nodes, wherein each node is parameterized by a set of differentiable parameters corresponding to a predefined finite set of potential logic gate operators:

This limitation could be regarded as a form of data gathering, which is insignificant extra-solution activity. MPEP § 2106.05(g).

The claim further recites “applying the updated differentiable parameters to at least one node” and that the identifying is performed as part of “generating, based on the selecting, a physical fixed logic gate network based on the selected single logic gate operators for at least some of the plurality of nodes”. However, these limitations merely restrict the judicial exception to the field of use of fixed physical network creation. MPEP § 2106.05(h).

Step 2B: The claim does not contain significantly more than the judicial exception. The receiving limitation is directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). The instantiating limitation is directed to the well-understood, routine, and conventional activity of storing or retrieving information in memory. Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Otherwise, the analysis at this step mirrors that of step 2A, prong 1. As an ordered whole, the claim is directed to a mathematical process of training a network, at least some steps of which may be performed mentally. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
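For readers untangling the mathematical-concept characterization, the construction claim 1 recites is compact enough to sketch. The following is illustrative only: it assumes a softmax parameterization over the standard real-valued relaxations of the sixteen two-input Boolean functions, which is one common way to build such a node, not necessarily the application's disclosed implementation.

import numpy as np

# Real-valued relaxations of the 16 two-input Boolean functions
# (probabilistic form: AND -> a*b, OR -> a+b-a*b, NOT a -> 1-a, ...).
GATE_OPS = [
    lambda a, b: np.zeros_like(a),         # FALSE
    lambda a, b: a * b,                    # AND
    lambda a, b: a - a * b,                # A AND NOT B
    lambda a, b: a,                        # A (direct connection)
    lambda a, b: b - a * b,                # NOT A AND B
    lambda a, b: b,                        # B (direct connection)
    lambda a, b: a + b - 2 * a * b,        # XOR
    lambda a, b: a + b - a * b,            # OR
    lambda a, b: 1 - (a + b - a * b),      # NOR
    lambda a, b: 1 - (a + b - 2 * a * b),  # XNOR
    lambda a, b: 1 - b,                    # NOT B
    lambda a, b: 1 - b + a * b,            # A OR NOT B
    lambda a, b: 1 - a,                    # NOT A
    lambda a, b: 1 - a + a * b,            # NOT A OR B
    lambda a, b: 1 - a * b,                # NAND
    lambda a, b: np.ones_like(a),          # TRUE
]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def node_forward(a, b, theta):
    # Differentiable node output: a softmax(theta)-weighted mixture of
    # all 16 candidate gate outputs, so gradients flow into theta.
    w = softmax(theta)
    outs = np.stack([op(a, b) for op in GATE_OPS])
    return np.tensordot(w, outs, axes=1)

def harden(theta):
    # After training: the single selected operator for this node.
    return int(np.argmax(theta))

The relaxation is what makes the "differentiable output" limitation work: node_forward is smooth in theta during training, while harden recovers the discrete gate the trained parameters favor.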
Claim 2

Step 1: A process, as above.

Step 2A Prong 1: The claim recites “aggregating groups of outputs through summation to obtain scores for use in computing the loss values.” This limitation recites a mathematical concept.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.

Claim 3

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “training the logic gate network further comprises training two or more directly successive layers of nodes positioned directly before a summation of bits.” However, this limitation merely restricts the judicial exception to the field of use of model training. MPEP § 2106.05(h).

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “training the logic gate network further comprises training two or more directly successive layers of nodes positioned directly before a summation of bits.” However, this limitation merely restricts the judicial exception to the field of use of model training. MPEP § 2106.05(h).
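The summation recited in claim 2 (and the "summation of bits" of claim 3) is a group-sum readout: the final layer's outputs are split into one group per class and summed into class scores. A minimal sketch, assuming equal-sized contiguous groups, which is an illustrative choice:

import numpy as np

def group_sum_scores(bits, num_classes):
    # Split the output bits into one group per class and sum each
    # group; the sums serve as class scores for the loss.
    groups = np.split(np.asarray(bits, dtype=float), num_classes)
    return np.array([g.sum() for g in groups])

# e.g., 8 output bits, 2 classes -> scores [3.1, 0.7]
print(group_sum_scores([0.9, 0.8, 0.7, 0.7, 0.1, 0.2, 0.3, 0.1], 2))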
Claim 4

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that “each node in the untrained logic gate network receives two input signals and produces a single output signal.” Instantiating such a network remains insignificant extra-solution activity under these further assumptions for the same reasons as given in the rejection of claim 1.

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that “each node in the untrained logic gate network receives two input signals and produces a single output signal.” Instantiating such a network remains well-understood, routine, and conventional activity under these further assumptions for the same reasons as given in the rejection of claim 1.

Claim 5

Step 1: A process, as above.

Step 2A Prong 1: The claim recites that “the set of potential logic gate operators includes at least one of AND logic gate operators, OR logic gate operators, NAND logic gate operators, NOR logic gate operators, and XOR logic gate operators.” Computing the outputs using the potential logic gate operators and identifying a single gate among the operators remain abstract ideas under these further assumptions.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.

Claim 6

Step 1: A process, as above.

Step 2A Prong 1: The claim recites that “the set of potential logic gate operators includes at least one of constant TRUE logic gate operators, constant FALSE logic gate operators, and inverter logic gate operators.” Computing the outputs using the potential logic gate operators and identifying a single gate among the operators remain abstract ideas under these further assumptions.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.

Claim 7

Step 1: A process, as above.

Step 2A Prong 1: The claim recites that “the set of potential logic gate operators includes direct connections.” Computing the outputs using the potential logic gate operators and identifying a single gate among the operators remain abstract ideas under these further assumptions.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.

Claim 8

Step 1: A process, as above.

Step 2A Prong 1: The claim recites “calculating gradients of the loss value with respect to the differentiable parameters of the nodes; and using an optimization algorithm based on gradient-descent to determine the updated differentiable parameters.” These are both mathematical concepts.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.

Claim 9

Step 1: A process, as above.

Step 2A Prong 1: The claim recites that “the gradient-descent optimization computes the gradients by performing backpropagation through the logic gate network.” This is a mathematical concept.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 8 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 8 analysis.
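The gradient mechanics of claims 8 and 9 reduce, for a single node, to differentiating the loss with respect to the node's gate-mixture parameters and stepping against the gradient. In this self-contained toy, finite differences stand in for backpropagation and a three-gate candidate set is assumed purely for brevity:

import numpy as np

rng = np.random.default_rng(0)

NAMES = ["AND", "OR", "XOR"]
OPS = [lambda a, b: a * b,               # AND
       lambda a, b: a + b - a * b,       # OR
       lambda a, b: a + b - 2 * a * b]   # XOR

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(theta, A, B):
    return softmax(theta) @ np.stack([op(A, B) for op in OPS])

def loss(theta, A, B, y):
    return float(np.mean((forward(theta, A, B) - y) ** 2))

A = np.array([0., 0., 1., 1.])
B = np.array([0., 1., 0., 1.])
y = np.array([0., 1., 1., 0.])           # teach the node to act as XOR

theta, lr, eps = rng.normal(size=3), 1.0, 1e-6
for _ in range(500):
    base = loss(theta, A, B, y)
    # finite differences stand in for backpropagation in this sketch
    grad = np.array([(loss(theta + eps * np.eye(3)[i], A, B, y) - base) / eps
                     for i in range(3)])
    theta -= lr * grad                   # gradient-descent update

print(NAMES[int(np.argmax(theta))])      # operator selected after training: XOR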
Claim 10

Step 1: A process, as above.

Step 2A Prong 1: The claim recites that “the set of potential logic gate operators includes at least one direct connection.” Computing the outputs using the potential logic gate operators and identifying a single gate among the operators remain abstract ideas under these further assumptions.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 9 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 9 analysis.

Claim 11

Step 1: A process, as above.

Step 2A Prong 1: The claim recites that “the set of potential logic gate operators includes at least one of AND logic gate operators, OR logic gate operators, NAND logic gate operators, NOR logic gate operators, and XOR logic gate operators.” Computing the outputs using the potential logic gate operators and identifying a single gate among the operators remain abstract ideas under these further assumptions.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 10 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 10 analysis.

Claim 12

Step 1: A process, as above.

Step 2A Prong 1: The claim recites that “the set of potential logic gate operators includes at least one of constant TRUE logic gate operators, constant FALSE logic gate operators, and inverter logic gate operators.” Computing the outputs using the potential logic gate operators and identifying a single gate among the operators remain abstract ideas under these further assumptions.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 11 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 11 analysis.

Claim 13

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia, “simplifying a logical expression or logical sub-expressions of the fixed logic gate network.” This is a mathematical concept that could be performed mentally given a sufficiently simple expression.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.

Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.

Claim 14

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “storing, in a non-transitory computer-readable medium, data defining the fixed logic gate network for subsequent use with other input vectors.” This limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “storing, in a non-transitory computer-readable medium, data defining the fixed logic gate network for subsequent use with other input vectors.” This limitation recites the well-understood, routine, and conventional activity of storing and retrieving information in memory. MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
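Claim 13's simplification step can be pictured with any generic Boolean minimizer; sympy's simplify_logic is used below as a stand-in and is not the application's own method:

from sympy import symbols
from sympy.logic.boolalg import simplify_logic

a, b = symbols("a b")
expr = (a & b) | (a & ~b)      # a sub-expression of a fixed network
print(simplify_logic(expr))    # -> a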
Claim 15

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “implementing a logical expression of the fixed logic gate network in a field-programmable gate array (FPGA) for subsequent use with other input vectors.” This limitation amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “implementing a logical expression of the fixed logic gate network in a field-programmable gate array (FPGA) for subsequent use with other input vectors.” This limitation amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

Claim 16

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “implementing a logical expression of the fixed logic gate network in an application-specific integrated circuit (ASIC) for subsequent use with other input vectors.” This limitation amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “implementing a logical expression of the fixed logic gate network in an application-specific integrated circuit (ASIC) for subsequent use with other input vectors.” This limitation amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

Claim 17

Step 1: A process, as above.

Step 2A Prong 1: The claim recites the same judicial exceptions as in claim 1.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “implementing a logical expression of the fixed logic gate network in an application-specific integrated circuit (ASIC) customized to function as a tensor processing unit (TPU).” This limitation amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).

Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “implementing a logical expression of the fixed logic gate network in an application-specific integrated circuit (ASIC) customized to function as a tensor processing unit (TPU).” This limitation amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
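For claims 15-17, once every node has its single selected gate the fixed network is just a flat netlist that an FPGA or ASIC flow can consume. The emitter below is a hypothetical sketch; the netlist tuples, gate names, and Verilog style are all illustrative assumptions:

# map selected two-input gates to Verilog operators for assign statements
VERILOG = {"AND": "&", "OR": "|", "XOR": "^"}

def emit_assign(out, op, a, b):
    if op == "NAND":                      # gates without a single operator
        return f"assign {out} = ~({a} & {b});"
    return f"assign {out} = {a} {VERILOG[op]} {b};"

netlist = [("n2", "AND", "x0", "x1"),     # (output, selected gate, inputs)
           ("n3", "XOR", "n2", "x2"),
           ("n4", "NAND", "n2", "n3")]
print("\n".join(emit_assign(*n) for n in netlist))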
Claim Rejections - 35 USC § 103

Claims 1-2, 5-6, 8-9, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Teig et al. (US 10867247) (“Teig”) in view of Kajino (US 20220374701) (“Kajino”) and further in view of Eban et al. (US 20190266513) (“Eban”).

Regarding claim 1, Teig discloses “[a] method for training and generating a fixed logic gate neural network, comprising: receiving, at a computing system, a training data set of input vectors and corresponding target output values (during the training of the machine-trained (MT) network, the constant parameters of the activation functions of the network may be adjusted by having the network process input data sets [input vectors] with known output data sets [target output values] – Teig, col. 1, ll. 47-63); instantiating, in a memory of the computing system, an untrained logic gate network with a plurality of nodes, wherein each node is parameterized by a set of differentiable parameters corresponding to a predefined finite set of potential logic gate operators (activation function of the MT network is a non-monotonic function that can be adjusted during machine training to emulate different Boolean logical operators [logic gate operators, of which there is a finite set]; during the training of the MT network, the constant parameters [set of parameters corresponding to the operators] of the activation functions of the network may be adjusted [suggesting that the network was originally an untrained network] – Teig, col. 1, ll. 47-63; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64); iteratively training the logic gate network via a plurality of training iterations (after completing error propagation, a solution selector determines whether it should stop training the MT network; when the solution selector determines that it should continue training, the process selects another training solution and iterates multiple times through the training solution – Teig, col. 14, ll. 35-55), each training iteration including: forward-propagating a batch of the input vectors through the logic gate network to generate a training network output by, for each node, computing [an] … output that is a function of the outputs of the potential logic gate operators of the respective node according to current differentiable parameters thereof (MT network processes input values to produce a set of output values; the processing [forward-propagating] entails each processing node of the MT network having its linear operator compute a weighted sum of its input, and then having its nonlinear activation operator [potential logic gate operators] compute a function based on the output of the linear component – Teig, col. 13, ll. 27-39 [output of particular layer = output of potential logic gate operators; output of entire network = training network output that is a function thereof]; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64; cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup function is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator; when the cup function is adjusted into a second form for another set of processing nodes, each such node can emulate an XNOR operator [i.e., the activation functions function as logic gate operators] – Teig, col. 7, l. 62-col. 8, l. 2); computing a loss value that quantifies a difference between the training network output and the corresponding target output values (error calculator computes a set of error [loss] values from (1) the output value set produced by the MT network for the supplied input value set [training network output], and (2) the output value set from the selected training input/output solution [target output values] – Teig, col. 13, l. 56-col. 14, l. 7);
determining, via a training optimization algorithm, updated differentiable parameters for at least one node (after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust [update] their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process [optimization algorithm] (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network – Teig, col. 14, ll. 35-55; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64); and applying the updated differentiable parameters to at least one node (after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust [update] their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process [optimization algorithm] (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., the updated parameters are applied to the nodes at that time] – Teig, col. 14, ll. 35-55); selecting, …, for each of at least some of the plurality of nodes, a single logic gate operator from the predefined finite set of potential logic gate operators based on the differentiable parameters of the respective node (after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust [update] their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator] – Teig, col. 14, ll. 35-55; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64); and generating, based on the selecting, a physical fixed logic gate network based on the selected single logic gate operators for at least some of the plurality of nodes (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup function is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator; when the cup function is adjusted into a second form for another set of processing nodes, each such node can emulate an XNOR operator – Teig, col. 7, l. 62-col. 8, l. 2; after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator and collectively they comprise a fixed logic gate network] – id. at col. 14, ll. 35-55; see also col. 16, ll. 22-37 (disclosing a processor/physical circuitry on which the method is performed)).”
Teig appears not to disclose explicitly the further limitations of the claim. However, Kajino discloses “computing a differentiable output (differentiable SNN differs from a conventional SNN in that the outputs are differentiable with respect to model parameters – Kajino, paragraph 60) ….” Kajino and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig to compute a differentiable output of the network, as disclosed by Kajino, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to be trained via gradient descent, which would not be possible with non-differentiable outputs. See Kajino, paragraph 60.

Neither Teig nor Kajino appears to disclose explicitly the further limitations of the claim. However, Eban discloses “selecting, after completion of the plurality of training iterations, … [an] operator (decision threshold of machine-learned classification model is often adjusted after training to select a particular operating point on a precision-recall or ROC curve [adjustment of decision threshold = selected operator] – Eban, paragraph 4) ….” Eban and the instant application both relate to machine learning and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig and Kajino to select an operator after training, as disclosed by Eban, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the flexibility of the system to manipulate the network even after training is complete. See Eban, paragraph 4.

Regarding claim 2, Teig, as modified by Kajino/Eban, discloses that “each training iteration further comprises: aggregating groups of outputs through summation to obtain scores for use in computing the loss values (output error in output node 6 is used to derive the errors in the output of the fourth and fifth nodes during a backpropagation operation; the error [score for computing loss values] is derived as a weighted sum of the errors [aggregation of groups of outputs through summation] in the outputs of fourth and fifth nodes to which the output of node 1 is supplied – Teig, col. 15, ll. 9-21).”

Regarding claim 5, Teig, as modified by Kajino/Eban, discloses that “the set of potential logic gate operators includes at least one of AND logic gate operators, OR logic gate operators, NAND logic gate operators, NOR logic gate operators, and XOR logic gate operators (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator – Teig, col. 7, l. 62-col. 8, l. 2).”
Regarding claim 6, Teig, as modified by Kajino/Eban, discloses that “the set of potential logic gate operators includes at least one of constant TRUE logic gate operators, constant FALSE logic gate operators, and inverter logic gate operators (to emulate Boolean operators, values below a particular value in a range of values may be treated as a 0 (false) while values above the particular value may be treated as a 1 (true) [i.e., in the event that this threshold is set to negative or positive infinity, this reduces either to constant true or constant false] – Teig, col. 6, ll. 39-52).”

Regarding claim 8, Teig, as modified by Kajino/Eban, discloses that “the training optimization algorithm determines the updated differentiable parameters by: calculating gradients of the loss value with respect to the differentiable parameters of the nodes (MT network backpropagates errors [loss values] in the network-generated output data sets through the network; backpropagation process utilizes the chain rule iteratively to compute gradients for each layer; the backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable [i.e., the parameters are differentiable] – Teig, col. 12, ll. 48-64); and using an optimization algorithm based on gradient-descent to determine the updated differentiable parameters (after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust [update] their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process [optimization algorithm] (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network – Teig, col. 14, ll. 35-55).”

Regarding claim 9, Teig, as modified by Kajino/Eban, discloses that “the gradient-descent optimization computes the gradients by performing backpropagation through the logic gate network (after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network [i.e., by computing gradients] – Teig, col. 14, ll. 35-55).”

Regarding claim 14, Teig, as modified by Kajino/Eban, discloses “storing, in a non-transitory computer-readable medium, data defining the fixed logic gate network for subsequent use with other input vectors (features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium [non-transitory computer-readable medium] – Teig, col. 16, ll. 22-37; MT networks use processing nodes with activation functions that allow the MT to define a complex mathematical expression that solves a particular problem [i.e., stored for use with subsequent input vectors] – id. at col. 4, ll. 49-63).”

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Kajino and Eban and further in view of Engeler (US 5167008) (“Engeler”).

Regarding claim 3, neither Teig, Eban, nor Kajino appears to disclose explicitly the further limitations of the claim.
However, Engeler discloses that “training the logic gate network further comprises training two or more directly successive layers of nodes positioned directly before a summation of bits (neural network comprises a plurality of neural net layers identified by respective consecutive ordinal numbers [i.e., the network contains two or more successive layers]; network contains capacitive weighting networks that generate analog electric signals descriptive of a weighted summation of each bit-slice of its digital synapse input signals; network contains a training apparatus for re-writing the binary codes stored in word storage elements of the memories of the neural network layers [i.e., the bit addition occurs in each successive layer as part of training, which comes immediately after two layers in the case where there are two or more layers] – Engeler, claim 2).” Engeler and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig, Eban, and Kajino to train successive layers of the networks using bit summation, as disclosed by Engeler, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for the summation to be performed digitally, thereby avoiding the need for separate electrical components for performing analog addition. See Engeler, col. 4, l. 41-col. 5, l. 4.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Kajino and Eban and further in view of Tabaza et al., “Hysteresis Modeling of Impact Dynamics Using Artificial Neural Network,” in 37 J. Mechanics 333-38 (2021) (“Tabaza”).

Regarding claim 4, neither Teig, Eban, nor Kajino appears to disclose explicitly the further limitations of the claim. However, Tabaza discloses that “each node in the untrained logic gate network receives two input signals and produces a single output signal (Tabaza Fig. 1 shows a representation of a neural network in which each input is connected to one downstream node and the downstream node has two inputs [two input signals] and produces an output [single output signal]).” Tabaza and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Kajino/Eban to employ a network each of whose nodes has two inputs and one output, as disclosed by Tabaza, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would save processing power by increasing the simplicity of the model relative to traditional, fully-connected models. See Tabaza, p. 334.
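The two-in/one-out topology of claim 4 and the directly successive trainable layers of claim 3 amount to very little state per layer. A hypothetical instantiation sketch, with random wiring and a sixteen-gate candidate set assumed purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

def instantiate_layer(n_nodes, n_inputs, n_gates=16):
    # each node: two randomly wired inputs (two-in/one-out topology)
    # and one trainable parameter per candidate gate operator
    wiring = rng.integers(0, n_inputs, size=(n_nodes, 2))
    theta = rng.normal(size=(n_nodes, n_gates))
    return wiring, theta

# two directly successive trainable layers ahead of a final bit summation
layers = [instantiate_layer(64, 128), instantiate_layer(32, 64)]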
Claims 7 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Kajino and Eban and further in view of Sneha, The 16 Boolean Logic Functions of Two-Input Systems, https://www.allaboutcircuits.com/technical-articles/16-boolean-logic-functions-of-2-input-system/ (2020) (“Sneha”).

Regarding claim 7, neither Teig, Eban, nor Kajino appears to disclose explicitly the further limitations of the claim. However, Sneha discloses that “the set of potential logic gate operators includes direct connections (Sneha Table 2 gives a complete list of Boolean logic functions of one or two inputs, including A and B [direct connections]).” Sneha and the instant application both relate to Boolean logic and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Kajino/Eban to include direct connections among the logical functions modeled, as disclosed by Sneha, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the robustness of the system by ensuring that it is capable of modeling Boolean functions of a single input as well as two inputs. See Sneha, paragraph before Table 1.

Regarding claim 10, neither Teig, Eban, nor Kajino appears to disclose explicitly the further limitations of the claim. However, Sneha discloses that “the set of potential logic gate operators includes at least one direct connection (Sneha Table 2 gives a complete list of Boolean logic functions of one or two inputs, including A and B [direct connections]).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Kajino/Eban to include direct connections among the logical functions modeled, as disclosed by Sneha, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the robustness of the system by ensuring that it is capable of modeling Boolean functions of a single input as well as two inputs. See Sneha, paragraph before Table 1.

Regarding claim 11, Teig, as modified by Kajino, Eban, and Sneha, discloses that “the set of potential logic gate operators includes at least one of AND logic gate operators, OR logic gate operators, NAND logic gate operators, NOR logic gate operators, and XOR logic gate operators (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator – Teig, col. 7, l. 62-col. 8, l. 2).”

Regarding claim 12, Teig, as modified by Kajino, Eban, and Sneha, discloses that “the set of potential logic gate operators includes at least one of constant TRUE logic gate operators, constant FALSE logic gate operators, and inverter logic gate operators (to emulate Boolean operators, values below a particular value in a range of values may be treated as a 0 (false) while values above the particular value may be treated as a 1 (true) [i.e., in the event that this threshold is set to negative or positive infinity, this reduces either to constant true or constant false] – Teig, col. 6, ll. 39-52).”

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Kajino and Eban and further in view of Dawn et al. (US 11151457) (“Dawn”).

Regarding claim 13, the rejection of claim 1 is incorporated. Teig further discloses “a logical expression or logical sub-expressions of the fixed logic gate network”, as disclosed above in the rejection of claim 1. Neither Teig, Eban, nor Kajino appears to disclose explicitly the further limitations of the claim. However, Dawn discloses “simplifying a logical expression (iterative approach helps to algorithmically simplify the logical expression of predictor rules – Dawn, col. 28, l. 37-col. 29, l. 6) ….” Dawn and the instant application both relate to the simplification of logical expressions and are analogous.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig, Eban, and Kajino to use the system to simplify logical expressions, as disclosed by Dawn, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would reduce processor usage by ensuring that the logical forms processed thereby are as simple as possible. See Dawn, col. 28, l. 37-col. 29, l. 6.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Kajino and Eban and further in view of Ahmad (US 20190044535) (“Ahmad”).

Regarding claim 15, the rejection of claim 1 is incorporated. Teig further discloses “implementing a logical expression of the fixed logic gate network”, as shown above in the rejection of claim 1. Neither Teig, Eban, nor Kajino appears to disclose explicitly the further limitations of the claim. However, Ahmad discloses “implementing a … network in a field-programmable gate array (FPGA) for subsequent use with other input vectors (to implement a trained deep neural network [i.e., for use with other input vectors], an FPGA may be configured according to a deep neural network topology – Ahmad, paragraph 28).” Ahmad and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Kajino/Eban to perform the method on an FPGA, as disclosed by Ahmad, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the functions to be performed more efficiently than would be possible on another processor. See Ahmad, paragraph 28.

Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Kajino and Eban and further in view of Chakraborty et al. (US 11947590) (“Chakraborty”).

Regarding claim 16, the rejection of claim 1 is incorporated. Teig further discloses “implementing a logical expression of the fixed logic gate network,” as shown above in the rejection of claim 1. Teig/Kajino/Eban appears not to disclose explicitly the further limitations of the claim. However, Chakraborty discloses “implementing a … network in an application-specific integrated circuit (ASIC) for subsequent use with other input vectors (ML model used by services may be hosted on specialized hardware, for example, servers that use TPU ASICs – Chakraborty, col. 10, ll. 7-21) ….” Chakraborty and the instant application both relate to machine learning and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Kajino/Eban to use a TPU ASIC to perform the method, as disclosed by Chakraborty, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the network calculations to be accelerated while simultaneously using a floating-point format that allows for greater accuracy than integer numerics. See Chakraborty, col. 7, ll. 43-56.

Regarding claim 17, the rejection of claim 1 is incorporated. Teig further discloses “implementing a logical expression of the fixed logic gate network,” as shown above in the rejection of claim 1. Teig/Kajino/Eban appears not to disclose explicitly the further limitations of the claim.
However, Chakraborty discloses “implementing the … network in an application-specific integrated circuit (ASIC) customized to function as a tensor processing unit (TPU) (ML model used by services may be hosted on specialized hardware, for example, servers that use TPU ASICs – Chakraborty, col. 10, ll. 7-21) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Kajino/Eban to use a TPU ASIC to perform the method, as disclosed by Chakraborty, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the network calculations to be accelerated while simultaneously using a floating-point format that allows for greater accuracy than integer numerics. See Chakraborty, col. 7, ll. 43-56.

Claims 18 and 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Chakraborty and further in view of Eban.

Regarding claim 18, Teig discloses “[a] … circuit … with a logic gate neural network, comprising: physical logic circuitry … implementing a logical expression of a fixed logic gate network determined by training a differentiable logic gate neural network (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup function is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator; when the cup function is adjusted into a second form for another set of processing nodes, each such node can emulate an XNOR operator [i.e., the network is a logic gate neural network] – Teig, col. 7, l. 62-col. 8, l. 2; after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator and the resulting network is a fixed logic gate network that implements a logical expression] – id. at col. 14, ll. 35-55; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64; see also col. 16, ll. 22-37 (disclosing a processor/physical circuitry on which the method is performed)), wherein, during training of the differentiable logic gate neural network, (i) each node of the differentiable logic gate network is parameterized by a set of differentiable parameters corresponding to a respective predefined finite set of potential logic gate operators (activation function of the MT network is a non-monotonic function that can be adjusted during machine training to emulate different Boolean logical operators [logic gate operators, of which there is a predefined finite set]; during the training of the MT network, the constant parameters [set of parameters corresponding to the operators] of the activation functions of the network may be adjusted – Teig, col. 1, ll. 47-63; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64),
(ii) each node receives inputs, and (iii) each node produces an output (Teig Fig. 1, ref. char. 100 shows the instant network with multiple nodes N each of which receives inputs and produces an output), and wherein … the training selects, for each of at least some of the respective nodes, a single logic gate operator from the predefined finite set of potential logic gate operators based on updated differentiable parameters determined during training (after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust [update] their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator] – Teig, col. 14, ll. 35-55; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64), and wherein the fixed logic gate network is generated based on the selected single logic gate operators, such that the physical logic gate circuitry implements the selected single logic gate operator for the respective nodes (after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust [update] their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator, and these fixed nodes collectively represent a fixed logic gate network] – Teig, col. 14, ll. 35-55; see also col. 16, ll. 22-37 (disclosing a processor/physical circuitry on which the method is performed)).”

Teig appears not to disclose explicitly the further limitations of the claim. However, Chakraborty discloses “[a]n application-specific integrated circuit (ASIC) … comprising: physical logic circuitry fixed in silicon (ML model used by services may be hosted on specialized hardware, for example, servers that use TPU ASICs [note that ASICs are fixed in silicon for a particular purpose] – Chakraborty, col. 10, ll. 7-21) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig to use a TPU ASIC to perform the method, as disclosed by Chakraborty, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the network calculations to be accelerated while simultaneously using a floating-point format that allows for greater accuracy than integer numerics. See Chakraborty, col. 7, ll. 43-56.

Neither Teig nor Chakraborty appears to disclose explicitly the further limitations of the claim.
However, Eban discloses “after completion of the training of the … neural network, … select[ing] … [an] operator (decision threshold of machine-learned classification model is often adjusted after training to select a particular operating point on a precision-recall or ROC curve [adjustment of decision threshold = selected operator] – Eban, paragraph 4) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig and Chakraborty to select an operator after training, as disclosed by Eban, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the flexibility of the system to manipulate the network even after training is complete. See Eban, paragraph 4.

Regarding claim 22, Teig, as modified by Chakraborty/Eban, discloses that “the ASIC is customized to function as a tensor processing unit (TPU) (ML model used by services may be hosted on specialized hardware, for example, servers that use TPU ASICs – Chakraborty, col. 10, ll. 7-21) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Eban to use a TPU ASIC to perform the method, as disclosed by Chakraborty, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the network calculations to be accelerated while simultaneously using a floating-point format that allows for greater accuracy than integer numerics. See Chakraborty, col. 7, ll. 43-56.

Regarding claim 23, Teig, as modified by Chakraborty/Eban, discloses that “the set of logic gate operators includes one or more of AND logic gate operators, OR logic gate operators, NAND logic gate operators, NOR logic gate operators, and XOR logic gate operators (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator – Teig, col. 7, l. 62-col. 8, l. 2).”

Regarding claim 24, Teig, as modified by Chakraborty/Eban, discloses that “the set of potential logic gate operators includes at least one of constant TRUE logic gate operators, constant FALSE logic gate operators, and inverter logic gate operators (to emulate Boolean operators, values below a particular value in a range of values may be treated as a 0 (false) while values above the particular value may be treated as a 1 (true) [i.e., in the event that this threshold is set to negative or positive infinity, this reduces either to constant true or constant false] – Teig, col. 6, ll. 39-52).”

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Chakraborty and Eban and further in view of Kadkol et al. (US 11106430) (“Kadkol”).

Regarding claim 19, neither Teig, Eban, nor Chakraborty appears to disclose explicitly the further limitations of the claim. However, Kadkol discloses that “the logic circuitry includes bit adders that compute integer output scores (base index and one or more significant bits extracted are each 8 bits long, and the adder circuit is an 8-bit integer adder [i.e., its outputs are integers] – Kadkol, col. 9, l. 60-col. 10, l. 2).” Kadkol and the instant application both relate to neural networks and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig, Eban, and Chakraborty to use bit adders that compute integer outputs, as disclosed by Kadkol, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for efficient calculation of nonlinear activation functions of floating-point numbers. See Kadkol, col. 1, ll. 36-41.
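Claim 19's bit adders computing integer output scores are the hardware analogue of the group summation discussed for claim 2: in the hardened network each class score is simply a population count of that class's output bits. A toy sketch, where the loop stands in for what would be an adder tree in silicon:

def popcount_score(bits):
    # each iteration models one 1-bit add; hardware uses an adder tree
    total = 0
    for b in bits:
        total += b
    return total

print(popcount_score([1, 0, 1, 1]))  # integer class score -> 3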
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Chakraborty and Eban and further in view of Dawn.

Regarding claim 20, the rejection of claim 18 is incorporated. Teig further discloses “the logical expression of the fixed logic gate network,” as shown above in the rejection of claim 18. Neither Teig, Eban, nor Chakraborty appears to disclose explicitly the further limitations of the claim. However, Dawn discloses “a simplified logical expression (iterative approach helps to algorithmically simplify the logical expression of predictor rules – Dawn, col. 28, l. 37-col. 29, l. 6) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig, Eban, and Chakraborty to use the system to simplify logical expressions, as disclosed by Dawn, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would reduce processor usage by ensuring that the logical forms processed thereby are as simple as possible. See Dawn, col. 28, l. 37-col. 29, l. 6.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Chakraborty and Eban and further in view of Guilley et al. (EP 3506548) (“Guilley”).

Regarding claim 21, the rejection of claim 18 is incorporated. Teig further discloses that “the logic circuitry implements the logical expression of the fixed logic gate network”, as shown above in the rejection of claim 18. Neither Teig, Eban, nor Chakraborty appears to disclose explicitly the further limitations of the claim. However, Guilley discloses “implement[ing] the logical expression … using fixed combinational standard-cells (input memory block and second path may comprise combinatorial standard cells, which may be configured to determine the value to be stored in the output memory elements; combinatorial standard cells include memoryless logic gates that implement Boolean functions [logical expressions] – Guilley, paragraph 68).” Guilley and the instant application both relate to combinatorial standard cells and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig, Eban, and Chakraborty to implement the logical expression using combinatorial standard cells, as disclosed by Guilley, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow one designer to focus on the high-level logical aspect of the digital design while allowing another designer to focus on the implementation. See Guilley, paragraph 68.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Chakraborty and Eban and further in view of Sneha.

Regarding claim 25, neither Teig, Eban, nor Chakraborty appears to disclose explicitly the further limitations of the claim.
However, Sneha discloses that “the set of potential logic gate operators includes direct connections (Sneha Table 2 gives a complete list of Boolean logic functions of one or two inputs, including A and B [direct connections]).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Chakraborty/Eban to include direct connections among the logical functions modeled, as disclosed by Sneha, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the robustness of the system by ensuring that it is capable of modeling Boolean functions of a single input as well as two inputs. See Sneha, paragraph before Table 1.

Claims 26 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Ahmad and further in view of Eban.

Regarding claim 26, Teig discloses “[a] neural network device comprising: … configuration memory storing configuration data (Teig col. 16, ll. 22-37 disclose memories storing data) to … implement a logical expression of a fixed logic gate network determined by training a differentiable logic gate neural network (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup function is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator; when the cup function is adjusted into a second form for another set of processing nodes, each such node can emulate an XNOR operator [i.e., the network is a logic gate neural network] – Teig, col. 7, l. 62-col. 8, l. 2; after the computed error value is back propagated through the processing nodes of the MT network and the nodes adjust their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator and the resulting network expresses a logical expression] – id. at col. 14, ll. 35-55; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64), wherein, during training of the differentiable logic gate neural network, (i) each node of the differentiable logic gate network is parameterized by a set of differentiable parameters corresponding to a respective predefined finite set of potential logic gate operators (activation function of the MT network is a non-monotonic function that can be adjusted during machine training to emulate different Boolean logical operators [logic gate operators, of which there is a predefined finite set]; during the training of the MT network, the constant parameters [set of parameters corresponding to the operators] of the activation functions of the network may be adjusted – Teig, col. 1, ll. 47-63; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64), (ii) each node receives inputs, and (iii) each node produces an output (Teig Fig. 1, ref. char. 100 shows the instant network with multiple nodes N each of which receives inputs and produces an output),
Claims 26 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Ahmad and further in view of Eban.

Regarding claim 26, Teig discloses “[a] neural network device comprising: … configuration memory storing configuration data (Teig col. 16, ll. 22-37 disclose memories storing data) to … implement a logical expression of a fixed logic gate network determined by training a differentiable logic gate neural network (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup function is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator; when the cup function is adjusted into a second form for another set of processing nodes, each such node can emulate an XNOR operator [i.e., the network is a logic gate neural network] – Teig, col. 7, l. 62-col. 8, l. 2; after the computed error value is back-propagated through the processing nodes of the MT network and the nodes adjust their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator and the resulting network expresses a logical expression] – id. at col. 14, ll. 35-55; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64), wherein, during training of the differentiable logic gate neural network, (i) each node of the differentiable logic gate network is parameterized by a set of differentiable parameters corresponding to a respective predefined finite set of potential logic gate operators (activation function of the MT network is a non-monotonic function that can be adjusted during machine training to emulate different Boolean logical operators [logic gate operators, of which there is a predefined finite set]; during the training of the MT network, the constant parameters [set of parameters corresponding to the operators] of the activation functions of the network may be adjusted – Teig, col. 1, ll. 47-63; backpropagation process requires that the activation functions of the nonlinear operators of the processing nodes be differentiable – id. at col. 12, ll. 48-64), (ii) each node receives inputs, and (iii) each node produces an output (Teig Fig. 1, ref. char. 100 shows the instant network with multiple nodes N, each of which receives inputs and produces an output), and wherein … the training selects, for each of at least some respective nodes, a single logic gate operator from the respective predefined finite set of potential logic gate operators, wherein … configuration data [are] based on the selected single logic gate operators to program the [hardware] to implement the fixed logic gate network (after the computed error value is back-propagated through the processing nodes of the MT network and the nodes adjust [update] their linear and/or nonlinear operator parameters during the backpropagation, a solution selector uses a minimization process (e.g., a stochastic gradient descent minimizer) to determine when it should stop training of the MT network; process ends when the solution selector determines that it does not need to continue the training [i.e., when training is complete, each activation function of each node is fixed as emulating a single logical operator] – Teig, col. 14, ll. 35-55; see also col. 16, ll. 22-37 (disclosing a processor/hardware on which the method is performed)).”

Teig appears not to disclose explicitly the further limitations of the claim. However, Ahmad discloses “a field-programmable gate array (FPGA) that includes a programmable logic fabric (integrated circuit device may be a programmable integrated circuit, such as an FPGA that includes a programmable logic fabric of programmable logic units – Ahmad, paragraph 26); and configuration memory storing configuration data to program the programmable logic fabric of the FPGA (host processors may communicate with memory, which may hold data to be processed by the data processing system – Ahmad, paragraph 60; see also paragraph 30 (describing communication between the memory and the FPGA)) …; … wherein the … data … program the programmable logic fabric to implement the … network (designer may program the integrated circuit device to implement a trained DNN [network]; integrated circuit device may be a programmable integrated circuit, such as an FPGA, that includes a programmable logic fabric of programmable logic units – Ahmad, paragraph 26).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig to perform the method on an FPGA, as disclosed by Ahmad, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the functions to be performed more efficiently than would be possible on another processor. See Ahmad, paragraph 28.

Neither Teig nor Ahmad appears to disclose explicitly the further limitations of the claim. However, Eban discloses “after completion of the training of the … neural network, … select[ing] … [an] operator (decision threshold of machine-learned classification model is often adjusted after training to select a particular operating point on a precision-recall or ROC curve [adjustment of decision threshold = selected operator] – Eban, paragraph 4) ….” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig and Ahmad to select an operator after training, as disclosed by Eban, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the flexibility of the system to manipulate the network even after training is complete. See Eban, paragraph 4.
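To make the claim 26 mapping concrete, here is a minimal, hypothetical sketch of a differentiable logic gate node: each node holds one trainable parameter per operator in a predefined finite set, its training-time output is a differentiable mixture of the relaxed operators' outputs, and a single operator is selected after training. This is our own construction for illustration, not code from Teig, Eban, or the application.

```python
# Hypothetical sketch of a differentiable logic-gate node. Our own
# construction for illustration; not code from Teig, Eban, or the
# application.
import numpy as np

# Real-valued relaxations of a predefined finite set of two-input
# logic gate operators (inputs a, b lie in [0, 1]).
GATES = {
    "AND":  lambda a, b: a * b,
    "OR":   lambda a, b: a + b - a * b,
    "XOR":  lambda a, b: a + b - 2 * a * b,
    "NAND": lambda a, b: 1 - a * b,
}
NAMES = list(GATES)

def node_output(a, b, logits):
    """Training-time output: softmax-weighted mixture of the outputs of
    all potential operators, differentiable in the per-operator logits."""
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return sum(wi * GATES[name](a, b) for wi, name in zip(w, NAMES))

def select_operator(logits):
    """Post-training selection: collapse the node to the single operator
    with the largest differentiable parameter."""
    return NAMES[int(np.argmax(logits))]

logits = np.array([0.2, -0.1, 1.7, 0.3])  # toy "trained" parameters
print(node_output(0.9, 0.2, logits))      # soft output during training
print(select_operator(logits))            # -> "XOR" once training ends
```

In this sketch, the per-node logits play the role of the claimed "set of differentiable parameters," and the argmax step plays the role of selecting "a single logic gate operator" after training.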
Regarding claim 28, Teig, as modified by Ahmad/Eban, discloses that “the set of logic gate operators includes one or more of AND logic gate operators, OR logic gate operators, NAND logic gate operators, NOR logic gate operators, and XOR logic gate operators (cup function allows the processing nodes of the MT network to emulate different logical operators; when the cup function is adjusted into a first form for one set of processing nodes, each such node emulates an AND operator – Teig, col. 7, l. 62-col. 8, l. 2).”

Regarding claim 29, Teig, as modified by Ahmad/Eban, discloses that “the set of potential logic gate operators includes at least one of constant TRUE logic gate operators, constant FALSE logic gate operators, and inverter logic gate operators (to emulate Boolean operators, values below a particular value in a range of values may be treated as a 0 (false) while values above the particular value may be treated as a 1 (true) [i.e., in the event that this threshold is set to negative or positive infinity, this reduces either to constant true or constant false] – Teig, col. 6, ll. 39-52).”

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Ahmad and Eban and further in view of Kadkol.

Regarding claim 27, neither Teig, Eban, nor Ahmad appears to disclose explicitly the further limitations of the claim. However, Kadkol discloses that “the programmable logic fabric is configured to implement bit adders that compute integer output scores (base index and one or more significant bits extracted are each 8 bits long, and the adder circuit is an 8-bit integer adder [i.e., its outputs are integers] – Kadkol, col. 9, l. 60-col. 10, l. 2).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Teig, Eban, and Ahmad to use bit adders that compute integer outputs, as disclosed by Kadkol, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for efficient calculation of nonlinear activation functions of floating-point numbers. See Kadkol, col. 1, ll. 36-41.

Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Teig in view of Ahmad and Eban and further in view of Sneha.

Regarding claim 30, neither Teig, Eban, nor Ahmad appears to disclose explicitly the further limitations of the claim. However, Sneha discloses that “the set of potential logic gate operators includes direct connections (Sneha Table 2 gives a complete list of Boolean logic functions of one or two inputs, including A and B [direct connections]).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Teig/Ahmad/Eban to include direct connections among the logical functions modeled, as disclosed by Sneha, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would increase the robustness of the system by ensuring that it is capable of modeling Boolean functions of a single input as well as two inputs. See Sneha, paragraph before Table 1.
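As a worked illustration of the claim 29 mapping above (our own sketch with hypothetical function names, not code from Teig): thresholding a real-valued activation emulates a Boolean output, and driving the threshold to negative or positive infinity degenerates the node into a constant-TRUE or constant-FALSE operator.

```python
# Hypothetical illustration of the claim 29 reasoning: a threshold turns
# a real activation into a Boolean value, and a threshold at +/- infinity
# collapses the node into a constant TRUE or constant FALSE operator.
import math

def emulated_bool(activation: float, threshold: float) -> int:
    """Values above the threshold read as 1 (TRUE); values below as 0 (FALSE)."""
    return 1 if activation > threshold else 0

activations = [-3.2, -0.4, 0.1, 2.7]

print([emulated_bool(x, 0.0) for x in activations])        # mixed 0s and 1s
print([emulated_bool(x, -math.inf) for x in activations])  # all 1s: constant TRUE
print([emulated_bool(x, math.inf) for x in activations])   # all 0s: constant FALSE
```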
Response to Arguments

Applicant's arguments filed February 6, 2026 (“Remarks”) have been fully considered but they are, except insofar as rendered moot by the entry of a new ground of rejection, not persuasive.

Regarding the eligibility rejection, Applicant argues that the claims as amended are eligible because (a) the training of the network is not practically performable in the human mind and does not represent a mathematical concept; (b) any judicial exception is integrated into a practical application because the claims are allegedly directed to the production of fixed logic gate networks whose logical expressions are fixed in ASIC hardware; and (c) per-node parameterization, post-training operator selection, and generation of a fixed gate-level network based on that selection are not well-understood, routine, and conventional. Remarks at 18-19. However, regarding (a), only the selection of the operator was deemed a mental process by the analysis, and the training steps are mathematical concepts by analogy to claim 2 of Example 47. Regarding (b), as noted in the rejection itself, the only additional elements of the claim either recite insignificant extra-solution activity that is well-understood, routine, and conventional or mere recitations of the field of use of the judicial exception. The allegedly inventive concept of training a fixed logic gate network (which, as noted below, is not defined by the claims) is part of the abstract idea itself and cannot supply an inventive concept. MPEP § 2106.05(I). Regarding (c), Examiner is not required to show that these elements are well-understood, routine, and conventional because he did not analyze any of these limitations as insignificant extra-solution activity at Step 2A, Prong 2.

Regarding the art rejection, Applicant argues that Teig fails to disclose the claims as amended because it allegedly does not disclose differentiable parameters that correspond to logic gate operators, computing a node output as a differentiable function of the logic operators' outputs, or selecting a single operator per node from a finite set and generating a fixed logic gate network by replacing nodes with logic gates. Remarks at 19-20. However, some of the features that Applicant argues Teig does not disclose are not claimed. For example, claim 1 does not require replacing nodes with logic gates, but rather “generating … a physical fixed logic gate network based on the selected single logic gate operators”. Applicant does not define “physical fixed logic gate network,”[1] and the term may be construed to encompass a network that executes on physical hardware whose nodes operate as logic gates that are fixed after some determined point, which is what Teig describes. Teig does select single operators from a finite set because there is necessarily only a finite set of Boolean operators operating on two inputs. Since Teig discloses that the activation functions that characterize the Boolean operators are differentiable, it follows that the parameters that characterize those activation functions are differentiable parameters that correspond to the logical operator that each activation function emulates. While Teig does not disclose selecting operators after training, the argument that Teig does not disclose this element is rendered moot by the newly cited Eban reference, which is relied upon to teach this limitation.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN, whose telephone number is (571) 272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN C VAUGHN/
Primary Examiner, Art Unit 2125

[1] Note that Applicant is not defining “physical” here. While Applicant may intend to say that the resulting network is hard-coded by physically manipulating and fixing the transistors of the hardware on which it runs, the term “physical fixed logic gate network” may be more broadly construed to mean a network with a fixed physical instantiation, which would include standard fully trained software networks, because software networks are “physical” in the sense that they are encoded in a physical memory.

Prosecution Timeline

Aug 15, 2025: Application Filed
Nov 05, 2025: Non-Final Rejection (§101, §103)
Dec 28, 2025: Interview Requested
Jan 13, 2026: Examiner Interview Summary
Jan 13, 2026: Applicant Interview (Telephonic)
Feb 06, 2026: Response Filed
Feb 26, 2026: Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner in similar technology

Patent 12602448: PROGRESSIVE NEURAL ORDINARY DIFFERENTIAL EQUATIONS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602610: CLASSIFICATION BASED ON IMBALANCED DATASET (granted Apr 14, 2026; 2y 5m to grant)
Patent 12561583: Systems and Methods for Machine Learning in Hyperbolic Space (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541703: MULTITASKING SCHEME FOR QUANTUM COMPUTERS (granted Feb 03, 2026; 2y 5m to grant)
Patent 12511526: METHOD FOR PREDICTING A MOLECULAR STRUCTURE (granted Dec 30, 2025; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
With Interview: 81% (+19.4%)
Median Time to Grant: 3y 9m
PTA Risk: Moderate
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
