Prosecution Insights
Last updated: April 19, 2026
Application No. 18/036,561

NEUROMORPHIC COMPUTER SUPPORTING BILLIONS OF NEURONS

Non-Final OA: §101, §102, §103, §112
Filed
May 11, 2023
Examiner
GONZALES, VINCENT
Art Unit
2124
Tech Center
2100 — Computer Architecture & Software
Assignee
ZHEJIANG UNIVERSITY
OA Round
1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 89%

Examiner Intelligence

Career Allow Rate: 78% (410 granted / 522 resolved; +23.5% vs TC avg, above average)
Interview Lift: +10.5% among resolved cases with interview (moderate, roughly +10%)
Avg Prosecution: 3y 6m typical timeline; 26 applications currently pending
Career History: 548 total applications across all art units

Statute-Specific Performance

§101: 21.2% (-18.8% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§103: 39.9% (-0.1% vs TC avg)
§112: 14.6% (-25.4% vs TC avg)
Comparisons are against estimated Tech Center averages. Based on career data from 522 resolved cases.

Office Action

Rejections under §101, §102, §103 and §112.
DETAILED ACTION

This action is written in response to the application filed 5/11/23. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

In determining whether the claims are subject matter eligible, the Examiner applies guidance from MPEP § 2106. Step 1: Is the claim to a process, machine, manufacture, or composition of matter? No: claim 1 recites “A neuromorphic computer” comprising components (namely “hierarchical extended architecture” and “algorithmic process control”) which are not clearly hardware components. Thus, the broadest reasonable interpretation of the recited “computer” encompasses software per se embodiments, which do not fall into any one of the four statutory categories. Therefore, claim 1 is not patent eligible. Dependent claims 2-20 are rejected for the same reason.

Claim Interpretation - 35 USC § 112(f)

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. - An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 
112(f):

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 
112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

In claim 1: “hierarchical extended architecture” and “algorithmic process control”.

In claim 2: “primary organization management”, “secondary organization management”, and “tertiary organization management”.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f).

Claim Rejections - 35 USC § 112(b) - Indefiniteness

The following is a quotation of the second paragraph of 35 U.S.C. 112:

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

As noted above, the terms “hierarchical extended architecture” and “algorithmic process control” in claim 1 (as well as “primary/secondary/tertiary organization management” in claim 2) are being interpreted under §112(f). 
In cases involving a special purpose computer-implemented means-plus-function limitation, the Federal Circuit has consistently required that the structure be more than simply a general purpose computer or microprocessor and that the specification must disclose an algorithm for performing the claimed function, which, by definition, must contain a sequence of steps. See MPEP 2181(B)(II). However, the Applicant discloses no such algorithm for achieving the functionality recited in these limitations. Therefore, the claim is rejected under §112(b) as being indefinite. The Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f);

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

For the above reasons, claim 1 is rejected as being indefinite. This rejection applies equally to dependent claims 2-20.

Furthermore, dependent claim 3 specifies that the “tertiary organization management” uses “high-speed asynchronous interface communication”, whereas its parent claim 2 specifies “ultra high-speed communication”. This mismatch introduces uncertainty as to the scope of the claim, in view of the Applicant’s three specified communication speed levels (see generally spec. pp. 7-8, discussing low-speed, high-speed and ultra high-speed communications). Therefore claim 3 is indefinite. Dependent claims 10, 13, 16 and 19 inherit this deficiency from claim 3.

Furthermore, dependent claim 4 recites “the Network On Chip routing unit”. This term lacks antecedent basis, and consequently the claim is indefinite. 
Dependent claims 11, 14, 17 and 20 inherit this deficiency from claim 4.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(b) the invention was patented or described in a printed publication in this or a foreign country or in public use or on sale in this country, more than one year prior to the date of application for patent in the United States.

Claims 1-2, 5, 7 and 9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Modha (US 2019/0080229 A1, cited by Applicant in IDS dated 5/11/23).

Regarding claim 1, Modha discloses a neuromorphic computer supporting billions of neurons, comprising hierarchical extended architecture and algorithmic process control within the architecture; the architecture comprises multiple neuromorphic computing chips with hierarchical organization management for implementing computing tasks, each containing computing neurons and synaptic resources and forming a neural network, and spike events between computing neurons within the architecture are transmitted through a hierarchical transmission mode:

[0003] “Neuromorphic and synaptronic computation, also referred to as artificial neural networks, are computational systems that permit electronic systems to essentially function in a manner analogous to that of biological brains. Neuromorphic and synaptronic computation do not generally utilize the traditional digital model of manipulating 0s and 1s. Instead, neuromorphic and synaptronic computation create connections between processing elements that are roughly functionally equivalent to neurons of a biological brain. 
Neuromorphic and synaptronic computation may comprise various electronic circuits that are modeled on biological neurons.” (Emphasis added.)

[0029] “The hierarchical organization of the symmetric core circuits comprises multiple chip structures, each chip structure comprising a plurality of symmetric core circuits. The event routing system further comprises, for each chip structure, a chip-to-chip lookup table configured to determine target chip structures containing target axons for neuronal firing events generated by neurons in the chip structure, and a chip-to-chip packet switch configured to direct the neuronal firing events to the target chip structures containing the target axons.” (Emphasis added.)

[0030] “The hierarchical organization of the symmetric core circuits further comprises multiple board structures, each board structure comprising a plurality of chip structures. The event routing system further comprises, for each board structure, a board-to-board lookup table configured to determine target board structures containing target axons for neuronal firing events generated by neurons in said board structure, and a board-to-board packet switch configured to direct the neuronal firing events to the target board structures containing the target axons.” (Emphasis added.)

Modha further discloses that the algorithmic process control comprises controlling parallel processing of computing tasks within the architecture, controlling management of synchronization time within the architecture, and controlling reconstruction of neural networks within the architecture to achieve fault tolerance and robust management of computing neurons and synaptic resources.

[0004] “In biological systems, the point of contact between an axon of a neural module and a dendrite on another neuron is called a synapse, and with respect to the synapse, the two neurons are respectively called pre-synaptic and post-synaptic. The essence of our individual experiences is stored in conductance of the synapses. 
The synaptic conductance changes with time as a function of the relative spike times of pre-synaptic and post-synaptic neurons, as per spike-timing dependent plasticity (STDP). The STDP rule increases the conductance of a synapse if its post-synaptic neuron fires after its pre-synaptic neuron fires, and decreases the conductance of a synapse if the order of the two firings is reversed.” (Emphasis added.)

Regarding claim 2, Modha discloses the further limitations wherein the architecture adopts a three-level hierarchical organization management approach, comprising:

primary organization management: the architecture comprises multiple neuromorphic computing nodes organized in a tree topology, and low-speed communication is used between various neuromorphic computing nodes;

“tree topology” :: [0029] “The hierarchical organization of the symmetric core circuits comprises multiple chip structures”.

The Examiner interprets “low-speed communication” according to its broadest reasonable interpretation in view of the Applicant’s specification. This term is not defined by the Applicant, and the Applicant gives no meaningful guidance as to its scope, beyond its being relatively slower than the “high-speed communication” recited below.

“low-speed communication” :: [0091] “Examples of communication interface 324 may include a modem, a network interface (such as an Ethernet card), a communication port, or a PCMCIA slot and card, etc. Software and data transferred via communication interface 324 are in the form of signals which may be, for example, electronic, electromagnetic, optical, or other signals capable of being received by communication interface 324.”

secondary organization management: each neuromorphic computing node comprises multiple cascade chips organized in a grid topology, and high-speed communication is used between the cascade chips; and

Fig. 2 (reproduced below), illustrating a 6x8 grid array of neuromorphic computing nodes. 
[Image: media_image1.png (greyscale), reproducing Fig. 2 of Modha]

“high-speed communications” :: [0049] “Each core module 10 utilizes its core-to-core PSw 55 (FIG. 1) to pass along neuronal firing events in the eastbound, westbound, northbound, or southbound direction.”

tertiary organization management: each cascade chip contains multiple neuromorphic computing chips organized in a matrix array structure, and ultra high-speed communication is used between the neuromorphic computing chips.

[0050] “In one embodiment, the hierarchical organization of the core modules 10 comprises multiple chip structures 100 (FIG. 3), each chip structure 100 comprising a plurality of core modules 10.”

“ultra high-speed communications” :: [0051] “The chip structure 100 further comprises a chip-to-core address-event receiver (Chip-to-Core) 104, a core-to-chip address-event transmitter (Core-to-Chip) 105”.

Regarding claim 5, Modha discloses the further limitation wherein, based on the architecture, multiple computing tasks are controlled to be mapped to multiple computing neuromorphic computing nodes for parallel execution, and each computing neuromorphic computing node independently executes the assigned computing task.

[0034] “Embodiments of the invention further provide a neural network circuit that provides locality and massive parallelism to enable a low-power, compact hardware implementation.” (Emphasis added.)

Regarding claim 7, Modha discloses the further limitation wherein, based on the architecture, the same computing task mapped to multiple neuromorphic computing nodes is transformed into a single neuromorphic computing node by reconstructing the neural network structure, and the computing task is completed by a single neuromorphic computing node, achieving robust management of computing neurons and synaptic resources. 
[0034] “Embodiments of the invention further provide a neural network circuit that provides locality and massive parallelism to enable a low-power, compact hardware implementation.” (Emphasis added.) See also the claim limitation mapping for claim 1, illustrating a hierarchical arrangement of processing components.

The Examiner notes that the limitation “achieving robust management of computing neurons and synaptic resources” is merely an intended result that does not further limit the recited neuromorphic computer.

Regarding claim 9, Modha discloses the further limitation wherein, based on the architecture, multiple computing tasks are controlled to be mapped to multiple computing neuromorphic computing nodes for parallel execution, and each computing neuromorphic computing node independently executes the assigned computing task.

[0034] “Embodiments of the invention further provide a neural network circuit that provides locality and massive parallelism to enable a low-power, compact hardware implementation.” (Emphasis added.)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

The following are the references relied upon in the rejections below:

Modha, primary reference (US 2019/0080229 A1, cited by Applicant in IDS dated 5/11/23)

Amir (US 2019/0266481 A1)

Claims 3, 6, 8, 10, 12-13, 15-16 and 18-19 are rejected under 35 U.S.C. 
103 as being unpatentable over Modha and Amir.

Regarding claim 3, Modha discloses the further limitation wherein, for primary organization management, Ethernet communication is used between various neuromorphic computing nodes ([0091], Ethernet).

Amir discloses the following further limitations which Modha does not disclose: for secondary organization management, field programmable gate array (FPGA) communication mode is adopted between all cascade chips;

[0025] “Event-based neuromorphic chips may be implemented using asynchronous logic circuits, driven by their input events and internal events, without a global periodic clock. Such systems may provide the benefit of reduced power consumption and faster processing. In some such embodiments, one or more neuromorphic chips are coupled to other components, such as a CPU, Memory, FPGA, or communication interfaces, to create a neuromorphic-enabled system.” (Emphasis added.)

and for tertiary organization management, high-speed asynchronous interface communication is adopted between various neuromorphic computing chips.

[0025] “Event-based neuromorphic chips may be implemented using asynchronous logic circuits, driven by their input events and internal events, without a global periodic clock. Such systems may provide the benefit of reduced power consumption and faster processing. In some such embodiments, one or more neuromorphic chips are coupled to other components, such as a CPU, Memory, FPGA, or communication interfaces, to create a neuromorphic-enabled system.” (Emphasis added.)

At the time of filing, it would have been obvious to a person of ordinary skill to employ both FPGA and asynchronous communication protocols (as taught by Amir) in combination with the Modha system, because these are both widely used protocols capable of handling the high bandwidth requirements between neuromorphic cores. Both disclosures pertain to neuromorphic computing architectures. 
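For orientation, the hierarchical event routing that Modha is repeatedly cited for (per-level lookup tables plus packet switches directing firing events, [0029]-[0030]) can be sketched in a few lines. This is an illustrative reconstruction, not code from either reference; the function names, table layout, and two-level depth are all hypothetical.

```python
# Hypothetical sketch of hierarchical spike-event routing via lookup
# tables, in the spirit of Modha [0029]-[0030]. Illustrative only.

def make_router(board_lut, chip_luts):
    """board_lut: target axon id -> board; chip_luts[board]: axon id -> chip."""
    def route(event):
        target_axon = event["target_axon"]
        board = board_lut[target_axon]        # board-to-board lookup
        chip = chip_luts[board][target_axon]  # chip-to-chip lookup within board
        return (board, chip)                  # packet switches would forward here
    return route

route = make_router(
    board_lut={7: "board0", 42: "board1"},
    chip_luts={"board0": {7: "chip3"}, "board1": {42: "chip0"}},
)
print(route({"target_axon": 42}))  # ('board1', 'chip0')
```

The point of the two-table design is that each level only needs to resolve its own tier of the hierarchy, which is why the claim mapping treats the chip-level and board-level structures as parallel instances of one scheme.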
Regarding claim 6, Amir discloses the following further limitations which Modha does not disclose wherein, based on the architecture, controlling various hierarchical organization management using asynchronous event driven working mechanisms to achieve synchronous management, ensuring the asynchronous progress of different computing tasks;

[0025] “Event-based neuromorphic chips may be implemented using asynchronous logic circuits, driven by their input events and internal events, without a global periodic clock. Such systems may provide the benefit of reduced power consumption and faster processing. In some such embodiments, one or more neuromorphic chips are coupled to other components, such as a CPU, Memory, FPGA, or communication interfaces, to create a neuromorphic-enabled system.” (Emphasis added.)

and simultaneously controlling the entire architecture using global synchronization signals to ensure time synchronization management of the same computing task.

[0039] “To ensure correct computation for any possible connectivity graph, all cores and neurons should complete their computation for tick t before any core can start computing tick t+1. A global time barrier method may be applied in the form of an external signal, which may be denoted as the T1 clock. The signal is sent to mark the beginning of a tick, and its originator (which may be off-chip) ensures that all cores have completed their computation of the previous tick. Since this internal chip information is not readily available outside of the chip, one strategy is to wait until no more output is observed at the chip outputs, or to wait a predetermined time (e.g., a millisecond).”

The obviousness analysis of claim 3 applies equally here. 
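The T1-clock scheme quoted from Amir [0039] (no core may begin tick t+1 until every core has finished tick t) behaves like a barrier synchronization. A minimal sketch under that reading, using Python threads as stand-ins for cores; all names and the thread-based framing are hypothetical, not taken from the reference:

```python
import threading

# Hypothetical sketch of the global time barrier in Amir [0039]:
# every core finishes tick t before any core starts tick t+1.

NUM_CORES, TICKS = 4, 3
barrier = threading.Barrier(NUM_CORES)  # stands in for the external T1 signal
log = []
lock = threading.Lock()

def core(core_id):
    for t in range(TICKS):
        with lock:
            log.append((t, core_id))  # "compute" tick t
        barrier.wait()                # block until every core has finished tick t

threads = [threading.Thread(target=core, args=(i,)) for i in range(NUM_CORES)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# All tick-t entries precede all tick-(t+1) entries, regardless of scheduling.
assert all(log[i][0] <= log[i + 1][0] for i in range(len(log) - 1))
```

The barrier is the whole mechanism: within a tick, cores run asynchronously (the event-driven behavior of [0025]); across ticks, the shared signal imposes synchrony, which is how the two limitations of claim 6 coexist.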
Regarding claim 8, Amir discloses the following further limitation which Modha does not disclose wherein, based on the architecture, when a neuromorphic computing node executing a computing task occurs fault, controlling the conversion of the computing task executed by the faulty neuromorphic computing node to a backup neuromorphic computing node by reconstructing the neural network structure, achieving fault tolerance management of computing neurons and synaptic resources.

[0067] “In some embodiments, multiple systems are allocated for a given partition. In this way, fault tolerance may be provided. Communication may be split at each output port and then converged on the input port. Error detection may be implemented on each input, such as comparing the spikes from two different boards to detect divergences, or triple-mode redundancy.”

At the time of filing, it would have been obvious to a person of ordinary skill to apply the fault tolerance technique disclosed by Amir to the Modha system. Fault tolerance is an integral part of most modern (i.e., as of the time of filing) high-powered computing systems, because small processing components are vulnerable to noise and faults.

Regarding claim 10, Modha discloses the further limitation wherein, based on the architecture, multiple computing tasks are controlled to be mapped to multiple computing neuromorphic computing nodes for parallel execution, and each computing neuromorphic computing node independently executes the assigned computing task.

[0034] “Embodiments of the invention further provide a neural network circuit that provides locality and massive parallelism to enable a low-power, compact hardware implementation.” (Emphasis added.) 
Regarding claim 12, Amir discloses the following further limitation which Modha does not disclose wherein, based on the architecture, controlling various hierarchical organization management using asynchronous event driven working mechanisms to achieve synchronous management, ensuring the asynchronous progress of different computing tasks;

[0025] “Event-based neuromorphic chips may be implemented using asynchronous logic circuits, driven by their input events and internal events, without a global periodic clock. Such systems may provide the benefit of reduced power consumption and faster processing. In some such embodiments, one or more neuromorphic chips are coupled to other components, such as a CPU, Memory, FPGA, or communication interfaces, to create a neuromorphic-enabled system.” (Emphasis added.)

and simultaneously controlling the entire architecture using global synchronization signals to ensure time synchronization management of the same computing task.

[0039] “To ensure correct computation for any possible connectivity graph, all cores and neurons should complete their computation for tick t before any core can start computing tick t+1. A global time barrier method may be applied in the form of an external signal, which may be denoted as the T1 clock. The signal is sent to mark the beginning of a tick, and its originator (which may be off-chip) ensures that all cores have completed their computation of the previous tick. Since this internal chip information is not readily available outside of the chip, one strategy is to wait until no more output is observed at the chip outputs, or to wait a predetermined time (e.g., a millisecond).” (Emphasis added.) 
Regarding claim 13, Amir discloses the following further limitations which Modha does not disclose wherein, based on the architecture, controlling various hierarchical organization management using asynchronous event driven working mechanisms to achieve synchronous management, ensuring the asynchronous progress of different computing tasks;

[0025] “Event-based neuromorphic chips may be implemented using asynchronous logic circuits, driven by their input events and internal events, without a global periodic clock. Such systems may provide the benefit of reduced power consumption and faster processing. In some such embodiments, one or more neuromorphic chips are coupled to other components, such as a CPU, Memory, FPGA, or communication interfaces, to create a neuromorphic-enabled system.” (Emphasis added.)

and simultaneously controlling the entire architecture using global synchronization signals to ensure time synchronization management of the same computing task.

[0039] “To ensure correct computation for any possible connectivity graph, all cores and neurons should complete their computation for tick t before any core can start computing tick t+1. A global time barrier method may be applied in the form of an external signal, which may be denoted as the T1 clock. The signal is sent to mark the beginning of a tick, and its originator (which may be off-chip) ensures that all cores have completed their computation of the previous tick. Since this internal chip information is not readily available outside of the chip, one strategy is to wait until no more output is observed at the chip outputs, or to wait a predetermined time (e.g., a millisecond).” (Emphasis added.) 
Regarding claim 15, Amir discloses the following further limitation which Modha does not disclose wherein, based on the architecture, the same computing task mapped to multiple neuromorphic computing nodes is transformed into a single neuromorphic computing node by reconstructing the neural network structure, and the computing task is completed by a single neuromorphic computing node, achieving robust management of computing neurons and synaptic resources.

[0067] “In some embodiments, multiple systems are allocated for a given partition. In this way, fault tolerance may be provided. Communication may be split at each output port and then converged on the input port. Error detection may be implemented on each input, such as comparing the spikes from two different boards to detect divergences, or triple-mode redundancy.”

Regarding claim 16, Amir discloses the following further limitation which Modha does not disclose wherein, based on the architecture, the same computing task mapped to multiple neuromorphic computing nodes is transformed into a single neuromorphic computing node by reconstructing the neural network structure, and the computing task is completed by a single neuromorphic computing node, achieving robust management of computing neurons and synaptic resources.

[0067] “In some embodiments, multiple systems are allocated for a given partition. In this way, fault tolerance may be provided. Communication may be split at each output port and then converged on the input port. 
Error detection may be implemented on each input, such as comparing the spikes from two different boards to detect divergences, or triple-mode redundancy.”

Regarding claim 18, Amir discloses the following further limitation which Modha does not disclose wherein, based on the architecture, when a neuromorphic computing node executing a computing task occurs fault, controlling the conversion of the computing task executed by the faulty neuromorphic computing node to a backup neuromorphic computing node by reconstructing the neural network structure, achieving fault tolerance management of computing neurons and synaptic resources.

[0067] “In some embodiments, multiple systems are allocated for a given partition. In this way, fault tolerance may be provided. Communication may be split at each output port and then converged on the input port. Error detection may be implemented on each input, such as comparing the spikes from two different boards to detect divergences, or triple-mode redundancy.”

The obviousness analysis of claim 8 applies equally here.

Regarding claim 19, Amir discloses the following further limitation which Modha does not disclose wherein, based on the architecture, when a neuromorphic computing node executing a computing task occurs fault, controlling the conversion of the computing task executed by the faulty neuromorphic computing node to a backup neuromorphic computing node by reconstructing the neural network structure, achieving fault tolerance management of computing neurons and synaptic resources.

[0067] “In some embodiments, multiple systems are allocated for a given partition. In this way, fault tolerance may be provided. Communication may be split at each output port and then converged on the input port. Error detection may be implemented on each input, such as comparing the spikes from two different boards to detect divergences, or triple-mode redundancy.”

The obviousness analysis of claim 8 applies equally here. 
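The error-detection idea Amir [0067] is cited for (running the same partition on two boards and comparing their spike outputs to detect divergence) amounts to diffing two per-tick spike streams. A hedged sketch with hypothetical names and illustrative data, not code from the reference:

```python
# Hypothetical sketch of dual-board spike comparison per Amir [0067]:
# the same partition runs on two boards; a per-tick mismatch between
# their spike outputs flags a fault. Illustrative only.

def detect_divergence(board_a, board_b):
    """Each argument is a list of per-tick spike-id sets.
    Returns the first tick where the boards disagree, or None."""
    for tick, (spikes_a, spikes_b) in enumerate(zip(board_a, board_b)):
        if spikes_a != spikes_b:
            return tick  # divergence detected: trigger failover to a backup node
    return None

a = [{1, 4}, {2}, {3, 5}]
b = [{1, 4}, {2}, {3}]          # board B drops spike 5 at tick 2
print(detect_divergence(a, b))  # 2
```

Triple-mode redundancy, also mentioned in [0067], extends the same comparison to three replicas so the majority output can mask (not just detect) a single faulty board.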
Additional Relevant Prior Art and Allowable Subject Matter

The following reference was identified by the Examiner as being relevant to the disclosed invention, but is not relied upon in any particular prior art rejection:

Mavrovouniotis discloses a hierarchical neural network system (see p. 41, fig. 3 and passim). However, the NNs disclosed therein are not spiking neural networks, and the disclosure does not extend to the architectural features recited in claim 1. (Mavrovouniotis ML, Chang S. Hierarchical neural networks. Computers & Chemical Engineering. 1992 Apr;16(4):347-69.)

Claims 4, 11, 14, 17 and 20 are allowable over the prior art, but are rejected under §§ 101 and 112 as set forth supra.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vincent Gonzales, whose telephone number is (571) 270-3837. The examiner can normally be reached Monday-Friday, 7 a.m. to 4 p.m. MT.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang, can be reached at (571) 270-7092.

Information regarding the status of an application may be obtained from the USPTO Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.

/Vincent Gonzales/
Primary Examiner, Art Unit 2124

Prosecution Timeline

May 11, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585920: PREDICTING OPTIMAL PARAMETERS FOR PHYSICAL DESIGN SYNTHESIS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580040: DIFFUSION MODEL FOR GENERATIVE PROTEIN DESIGN (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566984: METHODS AND SYSTEMS FOR EXPLAINING ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561402: IDENTIFICATION OF A SECTION OF BODILY TISSUE FOR PATHOLOGY TESTS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547647: Unsupervised Machine Learning System to Automate Functions On a Graph Structure (granted Feb 10, 2026; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 89% (+10.5%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 522 resolved cases by this examiner. Grant probability derived from career allow rate.
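As a sanity check, the headline projections reduce to simple arithmetic on the examiner's career data shown above. The truncation convention below is our assumption about how the dashboard rounds, not its documented method:

```python
# Sketch of how the projections follow from the career data:
# 410 granted of 522 resolved, plus the +10.5% interview lift.
# Truncating toward zero is an assumed rounding convention.

granted, resolved = 410, 522
allow_rate = granted / resolved          # career allow rate, ~0.785
with_interview = allow_rate + 0.105      # add the +10.5% interview lift

print(int(allow_rate * 100))             # 78
print(int(with_interview * 100))         # 89
```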
