Prosecution Insights
Last updated: April 19, 2026
Application No. 17/077,759

SYMBOLIC VALIDATION OF NEUROMORPHIC HARDWARE

Non-Final OA (§101, §103)

Filed: Oct 22, 2020
Examiner: GERMICK, JOHNATHAN R
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)

Grant Probability: 47% (Moderate)
Projected OA Rounds: 5-6
Projected Time to Grant: 4y 2m
Grant Probability with Interview: 79%

Examiner Intelligence

Career Allow Rate: 47% of resolved cases (43 granted / 91 resolved; -7.7% vs TC avg)
Interview Lift: +32.1% (strong lift for resolved cases with interview vs. without)
Typical Timeline: 4y 2m avg prosecution; 28 currently pending
Career History: 119 total applications across all art units

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 91 resolved cases

Office Action (§101, §103)
DETAILED ACTION

This action is responsive to communications filed on 10/31/2025. Claims 1, 3-7, 9-10, 12-16, and 18-20 are pending in the case. Claims 1, 10, and 19 are independent claims. Claims 1, 7, 10, 16, 19, and 20 are amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/31/2025 has been entered.

Applicant's arguments filed 10/31/2025 have been fully considered but they are not persuasive.

With respect to claim rejections under 35 U.S.C. 101: Applicant argues that the claim recites computer-based operations. Examiner notes that a claim which recites computer-based operations is not on its own a reason for eligibility, nor a reason to suggest the claim does not recite abstract ideas or integrate them into a practical application or significantly more. The MPEP is replete with additional elements which invoke computer-based operations and are not eligible.

Applicant notes the features go beyond any recited abstract idea because they are rooted in compiler operations that enhance neural network validation and inference, and thus the claimed methods provide a technological improvement. Applicant highlights that validation of a neural network implementation at compile time cannot be done mentally. Examiner disagrees. Validation merely involves verification of a valid configuration of a neural network. This includes an inspection of abstract data representing the configuration of a neural network.
Such data can be validated mentally, even during an unspecified "compile time". For example, a validator could inspect the dimensions of a matrix multiplication at compile time to ensure they are valid. Reciting the time of the abstract idea does not indicate that the activity itself is not an abstract idea. Merely improving a technology by using a known technology (neural networks) with an abstract idea does not describe an improvement to the functioning of the technology itself, but rather an improvement to the abstract idea.

Applicant argues the claims are similar to Enfish because they improve the way computers operate. Examiner disagrees. The claims are not like those in Enfish. Enfish describes particular non-abstract ideas describing the functioning of a computer (claims which describe how memory technology is configured and used in a specific manner to improve data retrieval). Those computer-confined processes are what result in the improvement to technology. In contrast, the instant claims do not recite details describing the technologically confined steps which effect the improvement. Instead, the claims recite validation (an abstract idea), which, when applied to the computer-based technology, results in an improved neural network. This improvement, however, is the result of the abstract idea alone, and not an improvement to the functioning of the recited technology.

As a concrete example, the decision to compute 5*5 instead of 5+5+5+5+5 on a calculator does not reflect an improvement to the functioning of the calculator, despite the multiplication computation causing the calculator to function faster or with less energy. This is because merely performing an improved abstract idea on a computer technology does not describe any improvement to the functioning of the technology.
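[Editor's note] The examiner's matrix-dimension example can be made concrete. The sketch below is an illustration only, not part of the record; the function name and shapes are hypothetical. It checks whether a chain of matrices can be multiplied using only their shapes, never the values they contain, which is the sense in which such validation is "independent of input data":

```python
# Hypothetical sketch: compile-time validation of a matrix-multiply chain.
# Only (rows, cols) shapes are inspected; no input values are referenced.
def validate_matmul_chain(shapes):
    """Return True if matrices with the given (rows, cols) shapes
    can be multiplied left to right."""
    for (_, cols_a), (rows_b, _) in zip(shapes, shapes[1:]):
        if cols_a != rows_b:
            return False
    return True

print(validate_matmul_chain([(4, 3), (3, 5), (5, 2)]))  # True: dimensions align
print(validate_matmul_chain([(4, 3), (2, 5)]))          # False: 3 != 2
```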
Importantly, a claim which recited representing the addition computation as a multiplication representation so that the computation could be performed on a multiplier circuit particularly tuned for multiplication rather than addition would recite the specific technological functioning which enables the improvement. Such functioning is not described in the claims.

Applicant argues the claims recite a specific technological implementation and as such integrate the alleged abstract idea. Examiner disagrees. The claims merely describe inferring (a mental step) using a named processing unit comprising certain components. At best this generally links the abstract idea to a particular technology; the amendments do not describe how the IPU or parallel execution is performed such that they integrate the abstract idea into a practical application.

Finally, Applicant argues that claims rooted in technology which overcome a problem are patent eligible, noting that the instant claims amount to significantly more because the claims describe a validation process executed in conjunction with named hardware. Examiner disagrees. The claims are not rooted in technology, at least because they include only additional elements which the MPEP describes as not integrating into a practical application nor amounting to significantly more. Merely reciting a validation process (an abstract idea) in conjunction with named hardware does not make a claim eligible. This is clear because MPEP 2106.05(f) and MPEP 2106.05(h) particularly point out computer-related additional elements which do not make a claim eligible. These are identified and explained in the updated rejection. Therefore, the rejection is maintained.

With respect to claim rejections under prior art: Applicant argues that Baudart does not describe reading, constructing, and validating as claimed. Examiner disagrees.
The neural network schema described in the art is read from an existing pipeline, constructed according to a schema (i.e., an architecture-specific context-free grammar), and validated with truth constraints which correspond to the claimed comparison with ground truth independent of input data.

Applicant argues that validation is inherently tied to both data and configuration as it evaluates whether specific hyperparameter values comply with schema constraints. Examiner disagrees. Applicant appears to conflate "data" with the claimed "input data". The rejection suggests that validating configurations and/or hyperparameters is performed independent of "input data" such as training data applied to a neural network. While the art describes hyperparameter validation, this is not "inherently tied" to the claimed input data. The art makes clear that the pipeline optimization is optimizing the hyperparameters and configurations. The "individual operators" are functions which represent the transformations according to hyperparameters, training data, and test data. It is the operators themselves which are validated. This is not "dependent" on data any more than a computer processor is dependent on any hypothetical data which it operates on. As noted in the art, pg 3, "A pipeline is a directed acyclic graph (DAG) of operators and a pipeline is itself also an operator"; the pipelines or operators which are validated are not the input data themselves, any more than a neural network architecture is "tied" to or dependent on input data.

Examiner highlights that the cited art performs the validation based on the features of the operators and not any specific input data. However, even if it were demonstrated that the art performed the validation using specific input data, that validation is independent of any other input data not applied for the claimed validation.
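[Editor's note] The distinction the examiner draws, checking a configuration against schema constraints without ever touching training data, can be sketched as follows. This is an editorial illustration in the spirit of the schema-based validation attributed to Baudart (Lale); the constraint names and ranges are hypothetical and are not Lale's actual API:

```python
# Illustration only: validating a hyperparameter configuration against
# hand-rolled schema constraints. The checks read the configuration alone;
# no training or test data is consulted at any point.
def validate_config(config):
    errors = []
    if not (1e-6 <= config.get("learning_rate", 0) <= 1.0):
        errors.append("learning_rate out of range")
    # A side constraint spanning two hyperparameters, in the manner of
    # the "side constraints" quoted from the art:
    if config.get("solver") == "sgd" and config.get("momentum") is None:
        errors.append("sgd requires momentum")
    return errors

print(validate_config({"learning_rate": 0.01, "solver": "adam"}))  # []
print(validate_config({"learning_rate": 5.0, "solver": "sgd"}))
```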
The BRI of the claims is that the validating must be independent of at least a portion of input data, where input data is merely a label, because the claims have provided nothing to limit what "values" are included in "input data". Therefore, even if the cited validation were "tied" to or dependent on certain "input data" in some identifiable way, it is also independent of other "input data", thus corresponding to the claims.

Applicant continues, noting that neither Beran nor Wang resolves the deficiencies and that none of the cited art describes "inferring…using an…IPU…comprising multiple cores…". Examiner notes any deficiencies have been addressed in the updated rejection in view of Baudart/Beran further in view of Xiao, "NeuronLink: An Efficient Chip-to-Chip Interconnect for Large-Scale Neural Network Accelerators".

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-7, 9-10, 12-16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1/10/19

Under Step 1, claim 1 is directed to a method of validating a neural network, which is a process, one of the statutory categories. Claim 10 is directed to a computer program product for validating an artificial neural network system, which is a product of manufacture, one of the statutory categories. Claim 19 is directed to a system, which is a machine, one of the statutory categories.
Under Step 2A Prong 1, the claim recites the following limitations, which are considered mental evaluations: "reading a description of an artificial neural network containing no data-dependent branching, wherein the artificial neural network is a deep artificial neural network; based on the description of the artificial neural network, constructing a symbolic representation of an output of the artificial neural network, wherein the symbolic representation is a string of symbols, the symbols are defined by an architecture-specific context free grammar, the symbolic representation comprises a symbol for at least one input activation, and the symbols are independent of values included in input data such that the symbol for the at least one input activation does not reference any particular values of the at least one input activation and validating, at compile time, the artificial neural network by comparing the symbolic representation to a ground truth symbolic representation, wherein validating the artificial neural network is performed independent of the input data….inferring, during run time, the validated artificial neural network...to perform neural network inference".

Each of these limitations describes analysis made on data: constructing strings which describe a system according to a grammar and validating these strings are all decisions about abstract data, which can be performed in the mind. Therefore, the claim recites an abstract idea.

Step 2A Prong Two Analysis: The judicial exception is not integrated into a practical application.
In particular, the claims recite the additional element(s): from claim 10, "a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method", and from claim 19, "a computing node comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor of the computing node to cause the processor to perform a method… using an inference processing unit (IPU) …the IPU being configured…". These amount to mere instructions to apply a computer technology to an abstract idea: the recitation of generic named components without any description of how the solution is accomplished. These limitations do not describe how specific interactions are performed, but rather claim the result of applying the abstract idea to named computer components; see the MPEP 2106.05(f) consideration.

Further, the claim recites "comprising multiple computation cores interconnected by a network-on-chip …through parallel execution across the multiple computation cores and communication through the network-on-chip", which generally links the use of the judicial exception to a particular technological environment or field of use, because it only limits the claim to a network-on-chip in a parallel environment without reciting any of the specific functions which transform the abstract idea into a practical application; see MPEP 2106.05(h). Therefore, the claim is directed to a judicial exception.

Step 2B: Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application, nor do they amount to significantly more than the judicial exception, because they do not impose any meaningful limits on practicing the abstract idea.
Regarding Claim 3/12, 4/13, 9/18

The claim is directed to a statutory category. The claims recite further description of the abstract idea: "wherein the symbolic representation has configurable granularity.", "wherein the symbolic representation comprises at least one numeric value", "wherein the symbolic representation is a directed acyclic graph". Under Step 2A Prong 1, these limitations only serve to describe the abstract idea addressed in the independent claim. Each of these limitations describes an aspect of the data being evaluated in the abstract idea. The claims do not recite any additional elements beyond those identified in the parent claim. These additional elements do not integrate the abstract idea into a practical application nor provide significantly more.

Regarding Claim 5/14/20, 6/15, 7/16

The claim is directed to a statutory category recited in the corresponding parent claim. The claim recites more abstract ideas: "determining, from a number of operations in the symbolic representation, a number of cycles required for computation of the artificial neural network.", "evaluating the symbolic representation using input data to determine an output of the artificial neural network", "comparing the symbolic representation to a ground truth string, thereby validating an output of the artificial neural network system." Under Step 2A Prong 1, these limitations correspond to a mental process, because each is a decision about data which can be made in the mind. The claims do not recite any additional elements beyond those identified in the parent claim. These additional elements do not integrate the abstract idea into a practical application nor provide significantly more.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C.
§§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 6-7, 9-10, 12-13, 14-16, and 18-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Baudart et al., "Lale: Consistent Automated Machine Learning", further in view of Beran et al.,
"ViNNSL - The Vienna Neural Network Specification Language", further in view of Xiao, "NeuronLink: An Efficient Chip-to-Chip Interconnect for Large-Scale Neural Network Accelerators".

Regarding Claim 1/10/19

Baudart teaches: A method of validating an artificial neural network system, the method comprising:… A computer program product for validating an artificial neural network system, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method… A system comprising: a computing node comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor of the computing node to cause the processor to perform a method comprising… and comparing the symbolic representation to a predetermined symbolic representation, thereby validating the artificial neural network system (pg 1 "This paper introduces Lale, a library of high-level Python interfaces that simplifies and unifies automated machine learning in a consistent way"; pg 3 "p4: Check for invalid configurations early and prune them out of search spaces… This section shows Lale's abstractions for consistent AutoML, addressing the problem statements P1 ∧ P2 ∧ P3 ∧ P4 from Section 2."; pg 7 "This section highlights some of the trickier parts of the Lale implementation, which is entirely in Python… The Lale implementation adds Python 3 type hints so users can get additional help from tools such as MyPy, PyCharm, or VSCode….
This is demonstrated by Lale's operators from PyTorch (BERT, ResNet50), Weka (J48), and R (ARulesCBA)". Lale is a system for validation on computers via Python; its use on a computer is demonstrated in Table 1.)

reading a description of an artificial neural network containing no data-dependent branching, wherein the artificial neural network is a deep artificial neural network; based on the description of the artificial neural network, constructing a symbolic representation of an output of the artificial neural network, (pg 8 "We used the CIFAR-10 computer vision dataset. We picked the ResNet50 deep-learning model, since it has been shown to do well on CIFAR-10. Our experiments kept the architecture of ResNet50 fixed and tuned learning-procedure hyperparameters"; Section 6.3, pg 8 "Lale's search space compiler takes rich hyperparameter schemas including side constraints and translates them into semantically equivalent search spaces for different AutoML tools"; pg 4 "Mathematically, Lale views a pipeline as a function of the form [image] …This uses currying just like individual operators, plus an additional θtopology at the start to capture the steps and edges. A pipeline is trainable if both θtopology and θhyperparams are given". Lale takes a description of a deep neural network without data-dependent branching as input, then translates or constructs a new representation. The pipeline is a symbolic representation of the output of the neural network.)

wherein the symbolic representation is a string of symbols (pg 4 "The combined schema of an operator specifies the valid values along with search guidance for its latent arguments… For didactic purposes, this section discusses only a representative subset. Figure 7 shows the JSON Schema [18] specification of that subset.
The open-source Lale library includes JSON schemas for many operators [image]". As shown in Figure 7, the schema for the operator within the pipeline contains strings of symbols.)

the symbols are defined by an architecture-specific context-free grammar; the string conforms to the architecture-specific context-free grammar (pg 5 "A higher-order operator is an operator that takes another operator as an argument. Scikit-learn includes several higher-order operators including RFE, AdaBoostClassifier, and BaggingClassifier…. Lale searches both jointly, helping solve problem P3 from Section 2… A pipeline grammar is a context-free grammar that describes a possibly unbounded set of pipeline topologies". The pipeline describes the topology or architecture specific to the neural network using a context-free grammar of symbols.)

the symbols are independent of values included in input data such that the symbol for the at least one input activation does not reference any particular values of the at least one input activation; (pg 1 "A machine learning pipeline consists of one or more operators that take the input data through a series of transformations to finally generate predictions"; pg 4, Figure 7 [image]. None of the symbols reference particular values of the input and as such are understood to be independent of the values claimed.)

and validating, at compile time, the artificial neural network by comparing the symbolic representation to a ground truth symbolic representation, wherein validating the artificial neural network is performed independent of the input data (pg 3 "Check for invalid configurations early and prune them out of search spaces.
Even if the search for each hyperparameter uses a valid range in isolation, their combination can violate side constraints… It is possible (with varying levels of difficulty) to incorporate these side constraints with the search space specification schemes… Custom validators would need to be written for each tool"; pg 4 "The combined schema of an operator specifies the valid values along with search guidance for its latent arguments. It addresses problem P4 from Section 2, supporting automated search with a pruned search space and early error checking all from the same single source of truth"; pg 8, Section 6.3 "Lale's search space compiler takes rich hyperparameter schemas including side constraints". Error checking from a single source of truth amounts to comparing or validating the pipeline/operator discovered to a single source of truth, i.e., ground truth. Lale describes a system for compiling with constraints, thus validation happens at compile time. The hyperparameters themselves are constrained; this is independent of "the input data", where the claimed input data is at least a set of input data which is not needed for validation.)

and inferring, during run time, the validated artificial neural network using an inference processing unit (IPU)… the IPU being configured to perform neural network inference (Section 6.2, pg 8 "This section demonstrates Lale's versatility on three datasets from different modalities. Table 2 summarizes the results". The Lale validation system is used on data sets, thus used to infer using a processor corresponding to the claimed IPU.)
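[Editor's note] The claimed technique as characterized above, deriving a symbolic string for a network's output from its architecture description and comparing it to a ground-truth string, can be sketched minimally. The grammar and symbols below are editorial assumptions, not the application's actual grammar; the point is that the string is built and compared without any concrete activation values:

```python
# Hypothetical sketch: build a symbolic output string from a layer
# description, then validate it against a ground-truth string at
# "compile time". "x" stands in for the input activation symbol;
# no particular input values are ever referenced.
def symbolic_output(layers):
    expr = "x"  # symbol for the input activation
    for kind, name in layers:
        if kind == "dense":
            expr = f"({name}*{expr}+b_{name})"
        elif kind == "relu":
            expr = f"relu({expr})"
    return expr

net = [("dense", "W1"), ("relu", "r"), ("dense", "W2")]
ground_truth = "(W2*relu((W1*x+b_W1))+b_W2)"
assert symbolic_output(net) == ground_truth  # validation passes
```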
Baudart does not explicitly teach: the symbolic representation comprises a symbol for at least one input activation… such that the symbol for the at least one input activation does not reference any particular values of the at least one input activation;…[the IPU] comprising multiple computation cores interconnected by a network-on-chip… through parallel execution across the multiple computation cores and communication through the network-on-chip.

Beran, however, when addressing a schema-based definition of a neural network architecture, teaches: the symbolic representation comprises a symbol for at least one input activation … such that the symbol for the at least one input activation does not reference any particular values of the at least one input activation; (pg 3 "The definition schema shown in Fig. 3 is user driven and represents an XML based formal specification of a newly created neural network that has to be trained."; pg 4 "The corresponding definition and data document are presented in Listing 1 and 2"; pg 5 [image]. The neural network definition is a symbolic representation of the structure of the neural network and includes strings for the input activation by defining the activation function as a sigmoid. The symbol does not reference any particular values.)

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the neural network declaration of Baudart with the neural network specification definition described by Beran. One would have been motivated to make such a combination because Baudart and Beran describe the benefits of universal neural network object specification.
Beran notes: "Based on the Grid infrastructure N2Grid allows to build up a virtual community enabling arbitrary users to exchange knowledge (neural network resources, such as neural network objects and neural network paradigms) and to exploit the available computing resources for neural network specific tasks, leading to a Grid based, world-wide distributed, neural network knowledge and simulation system" (Beran abstract). Further, Baudart notes: "A unified syntax would make these tools more consistent, easier to learn, and easier to switch." (pg 3, Baudart)

Baudart/Beran does not explicitly teach: [the IPU] comprising multiple computation cores interconnected by a network-on-chip… through parallel execution across the multiple computation cores and communication through the network-on-chip.

Xiao, however, when addressing neural network inference, teaches: [the IPU] comprising multiple computation cores interconnected by a network-on-chip… and communication through the network-on-chip (abstract "Large-scale neural network (NN) accelerators typically consist of several processing nodes, which could be implemented as a multi- or many-core chip and organized via a network-on-chip (NoC)… we propose a lightweight and NoC-aware chip-to-chip interconnection scheme, enabling efficient interconnection for NoC-based NN chips. In addition, we evaluate the proposed techniques on a four connected NoC-based deep neural network (DNN) chips with four field programmable gate arrays (FPGAs)"; pg 2 "the proposed interchip and intrachip interconnection techniques, we propose a DNN accelerator with four NoC based chips organized in a 2-D mesh"; pg 7 "Our architecture is targeted on DNN inference". The above describes multi-core, multi-NoC chips interconnected for deep neural network inference.) through parallel execution across the multiple computation cores (pg 1 "NoC decouples DNN operations into data movement and computation.
NoCs offer parallelism as multiple neuron processing units can communicate with each other and operate simultaneously…"; pg 2 "we propose a set of virtual-channel (VC) router optimization methods…and route computation parallelization".)

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the neural network system of Baudart/Beran with the multi-core neural network hardware described by Xiao. One would have been motivated to make such a combination because Baudart/Beran and Xiao describe neural network inference of deep models. In particular, Xiao notes: "experimental results show that the proposed interconnection network can efficiently manage the data traffic inside DNNs with high-throughput and low-overhead against state-of-the-art interconnects" (Xiao abstract).

Claims 10 and 19 recite essentially equivalent limitations, which are rejected for the reasons provided with respect to the limitations of claim 1.

Regarding Claim 3/12

Baudart/Beran/Xiao teaches claim 1/10. Beran teaches: wherein the symbolic representation has configurable granularity (pg 2 "Our ViNNSL approach, seen as a semantic language standard… By using these schemata it is possible to describe the service capabilities, semantics, functions and parameters in a client interpretable way. We call this approach Dynamic Service Evolution (DSE) because a resource can change its semantics dynamically with respect to a defined schema". The schema of the symbolic representation is changed dynamically and therefore has configurable granularity.)

Regarding Claim 4/13

Baudart/Beran/Xiao teaches claim 3/12. Baudart teaches: wherein the symbolic representation comprises at least one numeric value (pg 4, Figure 7 [image]. The symbolic representation has numerical values.)
Regarding Claim 6/15

Baudart/Beran/Xiao teaches claim 1/10. Baudart teaches: evaluating the symbolic representation using input data to determine an output of the artificial neural network (pg 6 "This section evaluates Lale on OpenML classification tasks and on different data modalities. It also experimentally demonstrates the importance of side constraints for the optimization process. For each experiment, we specified a Lale search space and then used auto_configure to run hyperopt on it"; pg 7 "Table 1 presents the results of our experiments. For each experiment, we report the test accuracy of the best pipeline found averaged over 5 runs". The pipeline, which is a symbolic representation, is evaluated using input data to determine an output, which is a measure of accuracy.)

Regarding Claim 7/16

Baudart/Beran/Xiao teaches claim 1/10. Baudart teaches: comparing the symbolic representation to a ground truth string, to validate an output of the artificial neural network system (pg 3 "Check for invalid configurations early and prune them out of search spaces. Even if the search for each hyperparameter uses a valid range in isolation, their combination can violate side constraints… It is possible (with varying levels of difficulty) to incorporate these side constraints with the search space specification schemes… Custom validators would need to be written for each tool"; pg 4 "The combined schema of an operator specifies the valid values along with search guidance for its latent arguments.
It addresses problem P4 from Section 2, supporting automated search with a pruned search space and early error checking all from the same single source of truth". Error checking from a single source of truth amounts to comparing or validating the pipeline/operator discovered to a single source of truth, i.e., ground truth.)

Regarding Claim 9/18

Baudart/Beran/Xiao teaches claim 1/10. Baudart teaches: wherein the symbolic representation is a directed acyclic graph (pg 3 "A pipeline is a directed acyclic graph (DAG) of operators and a pipeline is itself also an operator. Since a pipeline contains operators and is an operator, it is highly composable". As previously noted, the pipeline description and its operators are the claimed symbolic representation.)

Claims 5, 14, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Baudart/Beran/Xiao, further in view of Wang, "Deep Neural Network Approximation for Custom Hardware: Where We've Been, Where We're Going".

Regarding Claim 5/14/20

Baudart/Beran/Xiao teaches claim 1/10/19. Baudart/Beran/Xiao does not explicitly teach: determining, from a number of operations in the symbolic representation, a number of cycles required for computation of the artificial neural network. Wang teaches: determining, from a number of operations in the symbolic representation, a number of cycles required for computation of the artificial neural network (pg 12 "On GPUs, 32 one-bit activations and weights can be packed into each word to perform bit-wise XNORs. On a Titan X Pascal GPU, 32 32-bit popcounts can be issued per cycle per streaming multiprocessor (SM). Thus, up to 512 binary MAC operations can be performed per cycle per SM". A number of MAC operations in the neural network specification, i.e., symbolic representation, indicates the number of cycles required in the multiprocessor for computing the neural network.)
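[Editor's note] The cycle-count mapping quoted from Wang reduces to simple arithmetic: divide the operation count by the per-cycle throughput and round up. The sketch below is an editorial illustration using Wang's quoted figure of 512 binary MAC operations per cycle per SM; the function name and the example operation count are hypothetical:

```python
# Back-of-envelope cycle estimate from an operation count, using the
# throughput figure quoted from Wang (512 binary MACs per cycle per SM).
import math

def cycles_required(num_mac_ops, macs_per_cycle=512):
    """Ceiling division: cycles needed to issue num_mac_ops operations."""
    return math.ceil(num_mac_ops / macs_per_cycle)

print(cycles_required(1_000_000))  # 1954 cycles on a single SM
```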
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the neural network system described by Baudart/Baran/Xiao to determine the number of cycles required for neural network computations, as described by Wang. Wang notes that a given number of operations can be performed in a GPU per cycle. One would have been motivated to make this combination because, as noted by Baran, the simulation environment, enabled by integration of a unified neural network specification, allows "Grid computing resources to harness free processing cycles for the 'power-hungry' neural network simulations." (Baran pg 1)

Conclusion

Prior Art: Taha et al., "Symbolic Interpretation of Artificial Neural Networks," describes extracting fuzzy rules, which are strings, from an existing MLP model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNATHAN R GERMICK, whose telephone number is (571) 272-8363. The examiner can normally be reached M-F 7:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached on 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.R.G./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Oct 22, 2020
Application Filed
Oct 18, 2023
Non-Final Rejection — §101, §103
Apr 25, 2024
Response Filed
Jun 21, 2024
Final Rejection — §101, §103
Aug 27, 2024
Response after Non-Final Action
Sep 05, 2024
Response after Non-Final Action
Sep 25, 2024
Examiner Interview Summary
Sep 25, 2024
Applicant Interview (Telephonic)
Sep 27, 2024
Request for Continued Examination
Oct 09, 2024
Response after Non-Final Action
Mar 03, 2025
Non-Final Rejection — §101, §103
Jun 18, 2025
Applicant Interview (Telephonic)
Jun 18, 2025
Examiner Interview Summary
Jul 11, 2025
Response Filed
Aug 22, 2025
Final Rejection — §101, §103
Oct 02, 2025
Interview Requested
Oct 10, 2025
Examiner Interview Summary
Oct 10, 2025
Applicant Interview (Telephonic)
Oct 31, 2025
Response after Non-Final Action
Dec 03, 2025
Request for Continued Examination
Dec 10, 2025
Response after Non-Final Action
Jan 28, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566962
DITHERED QUANTIZATION OF PARAMETERS DURING TRAINING WITH A MACHINE LEARNING TOOL
2y 5m to grant · Granted Mar 03, 2026
Patent 12566983
MACHINE LEARNING CLASSIFIERS PREDICTION CONFIDENCE AND EXPLANATION
2y 5m to grant · Granted Mar 03, 2026
Patent 12554977
DEEP NEURAL NETWORK FOR MATCHING ENTITIES IN SEMI-STRUCTURED DATA
2y 5m to grant · Granted Feb 17, 2026
Patent 12443829
NEURAL NETWORK PROCESSING METHOD AND APPARATUS BASED ON NESTED BIT REPRESENTATION
2y 5m to grant · Granted Oct 14, 2025
Patent 12443868
QUANTUM ERROR MITIGATION USING HARDWARE-FRIENDLY PROBABILISTIC ERROR CORRECTION
2y 5m to grant · Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
47%
Grant Probability
79%
With Interview (+32.1%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 91 resolved cases by this examiner. Grant probability derived from career allow rate.
