Prosecution Insights
Last updated: April 19, 2026
Application No. 18/034,445

Distributed multi-component synaptic computational structure

Status: Non-Final OA (§103)
Filed: Apr 28, 2023
Examiner: SACKALOSKY, COREY MATTHEW
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Innatera Nanosystems B.V.
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Projected Time to Grant: 4y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (16 granted / 25 resolved), +9.0% vs TC avg
Interview Lift: +49.4% across resolved cases with interview (strong)
Typical Timeline: 4y 2m average prosecution
Currently Pending: 39 applications
Career History: 64 total applications across all art units
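The headline examiner statistics above reduce to simple arithmetic. A minimal sketch in Python: the 16-of-25 figure comes from the page, while the with- and without-interview group rates are hypothetical values chosen only to be consistent with the displayed +49.4-point lift (the page does not publish the group split).

```python
# Illustrative math behind the examiner stats shown above.
# 16 granted / 25 resolved is from the page; the per-group interview
# rates below are hypothetical, chosen to match the +49.4pp lift.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Lift in percentage points: with-interview rate minus without."""
    return rate_with - rate_without

career = allow_rate(16, 25)           # 64.0
lift = interview_lift(99.2, 49.8)     # ~49.4 (hypothetical group rates)
```

Read this way, the "99% With Interview" tile is the allow rate among interviewed cases, and the lift is the gap between that and the non-interviewed group.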

Statute-Specific Performance

§101: 42.0% (+2.0% vs TC avg)
§103: 38.0% (-2.0% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 25 resolved cases.
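Each per-statute figure above is the examiner's rate minus the Tech Center average. A small sketch of that computation, where the examiner rates come from the page and the TC averages are back-computed from the displayed deltas (an assumption, since the page shows only the deltas):

```python
# Examiner statute-specific rates vs. Tech Center averages.
# Rates are from the page; the TC averages are back-computed from the
# displayed deltas (e.g. 42.0% - (+2.0%) = 40.0% for section 101).
examiner = {"101": 42.0, "103": 38.0, "102": 12.9, "112": 7.1}
tc_avg   = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

deltas = {s: round(examiner[s] - tc_avg[s], 1) for s in examiner}
# deltas -> {"101": 2.0, "103": -2.0, "102": -27.1, "112": -32.9}
```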

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/12/2023, 03/26/2025, 09/04/2025, and 12/22/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Allowable Subject Matter

Claims 5, 7, 8, 26, 30, and 37 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-3, 6, 10-12, 27-29, 31, and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Mitra et al. (S. Mitra, G. Indiveri and R. Etienne-Cummings, "Synthesis of log-domain integrators for silicon synapses with global parametric control," Proceedings of 2010 IEEE International Symposium on Circuits and Systems, Paris, France, 2010, pp. 97-100, doi: 10.1109/ISCAS.2010.5537019; hereinafter Mitra) in view of Yoon et al. (US 20160042271 A1, hereinafter Yoon).

Regarding Claim 1: Mitra teaches a spiking neural network comprising a plurality of presynaptic integrators (Mitra [page 97, section I, par. 2]: "Theoretical models of synaptic transmission have shown that a first order linear integrator with equal exponential rise and fall time is a good approximation of the EPSC [9] (see Fig. 2(a)). Linear integrators can be implemented in VLSI with very few transistors when their exponential transfer function is considered.") and a plurality of weight application elements (Mitra [page 97, section I, par. 1]: "In Fig. 1, the synaptic weight essentially provides a local control over the gain of individual EPSC. This gain can be set by a constant voltage reference, or can change with the network activity, e.g. to implement synaptic plasticity [7]."; [Figure 1 caption]: "Each synapse in the array should have a mechanism for local control of their weights (from activity dependent modification) and also global controls for their gain and time constants."), wherein a first group of weight application elements of the plurality of weight application elements is connected to receive the synaptic input signal from a first one of the plurality of presynaptic integrators (Mitra [Figure 1 caption]: "A cartoon of a synapse shows the generation of post-synaptic current (red) while receiving a pre-synaptic input (blue). Corresponding model of a VLSI synapse is shown below. Each synapse in the array should have a mechanism for local control of their weights (from activity dependent modification) and also global controls for their gain and time constants."), and wherein each weight application element of the first group of weight application elements is adapted to apply a weight value to the synaptic input signal to generate a synaptic output current, wherein the strength of the synaptic output current is a function of the applied weight value (Mitra [page 97, section I, par. 1]: "Specifically, the EPSC generation block of Fig. 1 is responsible for generating the actual synaptic current, with a biologically realistic temporal dynamics, and an amplitude proportional to its synaptic weight.").

Mitra does not distinctly disclose a plurality of output neurons; wherein each of the plurality of presynaptic integrators is adapted to receive a presynaptic pulse signal which incites accumulation of charge within the presynaptic integrator, and generate a synaptic input signal based on the accumulated charge such that the synaptic input signal has a pre-determined temporal dynamic; and wherein each of the plurality of output neurons is connected to receive a synaptic output current from a second group of weight application elements of the plurality of weight application elements, and generate a spatio-temporal spike train output signal based on the received one or more synaptic output currents.

However, Yoon teaches these limitations (Yoon [0029]: "The synapses 104 may receive output signals (i.e., spikes) from the level 102 neurons, scale those signals according to adjustable synaptic weights (where P is a total number of synaptic connections between the neurons of levels 102 and 106), and combine the scaled signals as an input signal of each neuron in the level 106. Every neuron in the level 106 may generate output spikes 110 based on the corresponding combined input signal."; Yoon [0032]: "The signal 108 may represent an input current of the level 102 neuron. This current may be accumulated on the neuron membrane to charge a membrane potential. When the membrane potential reaches its threshold value, the neuron may fire and generate an output spike to be transferred to the next level of neurons (e.g., the level 106). In some modeling approaches, the neuron may continuously transfer a signal to the next level of neurons."; (EN): a spike train is essentially a digital sequence of information, not dissimilar to a binary sequence, and could have a sequence of repeated no-spikes followed by a single spike).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Mitra and Yoon before him or her, to modify the circuits of Mitra for implementing silicon synapses with biologically plausible temporal dynamics and independent global control over gain and time constant to include the method for configuring an artificial neuron as shown in Yoon. The motivation for doing so would have been to use the method of Yoon to generate spike trains that exhibit spatio-temporal aspects (Yoon [0032], quoted above).

Regarding Claim 2: Mitra teaches the spiking neural network of claim 1, wherein each of the weight application elements comprises a weight application circuit comprising: a synaptic input receiver configured to receive the synaptic input signal from the presynaptic integrator and to generate a synaptic input current based on the synaptic input signal (Mitra [Figure 1 caption], quoted above; [page 97, section I, par. 1]: "In neuromorphic systems, a silicon synapse is typically implemented as a controlled current source that produces a post synaptic current (EPSC) upon receiving a pulsed input from a pre-synaptic neuron. Figure 1 shows a cartoon of a synapse on top and its corresponding VLSI model below. Specifically, the EPSC generation block of Fig. 1 is responsible for generating the actual synaptic current"); a weight storage element configured to store the weight value (Mitra [Figure 1 caption], quoted above); and a modification element configured to apply the weight value stored in the weight storage element to the synaptic input current to generate the synaptic output current (Mitra [Figure 1 caption], quoted above).

Regarding Claim 3: Mitra teaches the spiking neural network of claim 2, wherein the weight value stored in the weight storage element is adjustable (Mitra [Figure 1 caption], quoted above).

Regarding Claim 6: Mitra teaches the spiking neural network of claim 1, wherein the pre-determined temporal dynamic of the synaptic input signal that the presynaptic integrator generates is an AMPA, NMDA, GABAA, or GABAB temporal dynamic (Mitra [page 97, section I, par. 1]: "At the same time, independent control of the time-constants is necessary to model different kinds of biological synapses, such as AMPA and NMDA.").
Regarding Claim 10: Mitra teaches the spiking neural network of claim 1, wherein the spiking neural network comprises a plurality of first groups of weight application elements, wherein each one of the weight application elements in each first group is connected to receive the same synaptic input signal from a respective presynaptic integrator (Mitra [Figure 1 caption]: "A cartoon of a synapse shows the generation of post-synaptic current (red) while receiving a pre-synaptic input (blue). Corresponding model of a VLSI synapse is shown below. Each synapse in the array should have a mechanism for local control of their weights (from activity dependent modification) and also global controls for their gain and time constants."), and wherein each first group of weight application elements is connected to receive a synaptic input signal from a different one of the plurality of presynaptic integrators (Mitra [Figure 1 caption], quoted above; (EN): it can be inferred that each entry of the synapse array gets different pre-synaptic inputs from the integrators).

Regarding Claim 11: Mitra teaches the spiking neural network of claim 10, wherein the spiking neural network comprises a plurality of input neurons, wherein a respective one of the input neurons is connected to provide a presynaptic pulse signal to a respective one of the presynaptic integrators for providing a synaptic input signal for a respective first group of weight application elements (Mitra [Figure 1 caption], quoted above).

Regarding Claim 12: Mitra does not distinctly disclose the spiking neural network of claim 10, wherein the spiking neural network comprises a plurality of second groups of weight application elements, wherein each second group of weight application elements is connected to provide synaptic output signals to a different one of the plurality of output neurons. However, Yoon teaches this limitation (Yoon [0032]: "The signal 108 may represent an input current of the level 102 neuron. This current may be accumulated on the neuron membrane to charge a membrane potential. When the membrane potential reaches its threshold value, the neuron may fire and generate an output spike to be transferred to the next level of neurons (e.g., the level 106). In some modeling approaches, the neuron may continuously transfer a signal to the next level of neurons."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mitra and Yoon for the same reasons and with the same motivation set forth above regarding claim 1.

Regarding Claim 27: Mitra teaches the spiking neural network of claim 1, wherein the presynaptic integrator is configured to generate a synaptic input current for input to a plurality of the weight application elements (Mitra [Figure 1 caption], quoted above).

Regarding Claims 28, 29, 31, and 32: Due to claim language similar to that of Claims 1, 2, 10, and 11, respectively, these claims are rejected for the same reasons as presented above in the rejections of those claims.

Claim Rejections - 35 USC § 103

Claims 4, 9, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Mitra and Yoon as applied to claims 1 and 28 above, and further in view of Benjamin et al. (B. V. Benjamin et al., "Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations," Proceedings of the IEEE, vol. 102, no. 5, pp. 699-716, May 2014, doi: 10.1109/JPROC.2014.2313565; hereinafter Benjamin).

Regarding Claim 4: Mitra and Yoon do not distinctly disclose a spiking neural network wherein the network further comprises a row spike decoder configured to supply the presynaptic pulse signal on the basis of a presynaptic input spike such that the presynaptic pulse signal is allocated to the presynaptic integrator on the basis of the configuration of the spiking neural network. However, Benjamin teaches this limitation (Benjamin [Figure 16 caption]: "Transmitter and receiver architecture. (a) Transmitter: An interface (I) relays requests from spiking neurons (S) to a row arbiter (J) and dispatches the selected row's spikes (S) in parallel while encoding its address (Y). Another interface (I) relays the spikes from a latch to a column arbiter (J) and encodes the selected column's address (X). A sequencer (SEQ) directs latches (A) to deliver the row address, column address(es), and a tailword (T, generated by TB) to the output port. (b) Receiver: A sequencer (SEQ) directs two different latches (A) to load incoming row (Y) and column (X) addresses, which are decoded to select a row and one or more columns. These select lines are activated simultaneously, when the tailword (T) is received, delivering spikes to the row in parallel (S). The remaining latches operate autonomously (B), automatically overwriting old data after it has been read. Small discs symbolize combinational logic.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Mitra, Yoon, and Benjamin before him or her, to modify the silicon-synapse circuits of Mitra and Yoon to include the neuromorphic system for simulating large-scale neural models in real time shown in Benjamin. The motivation for doing so would have been to use the method of Benjamin to create specific transistor circuits for spiking-neuron processing at a scale that can produce relevant simulation results (Benjamin [page 699, section I, par. 1]: "Large-scale neural models seek to integrate experimental findings across multiple levels of investigation in order to explain how intelligent behavior arises from bioelectrical processes at spatial and temporal scales six orders of magnitude smaller (from nanometers to millimeters and from microseconds to seconds). Due to prohibitively expensive computing costs, very few models bridge this gap, failing to make behaviorally relevant predictions.").

Regarding Claim 9: Mitra and Yoon do not distinctly disclose the spiking neural network of claim 1, wherein the output neurons are controlled by a neuron control signal such as to control the neuron dynamics. However, Benjamin teaches this limitation (Benjamin [Figure 12 caption]: "Soma circuit. MEM models membrane time constant τs (through Ilks), input current is,in (through Ibks), and dendritic input vd (through Id; see dendrite circuit). QF models quadratic feedback vs²/2 (through Ian). K+ models high-threshold potassium conductance (through Ilkk and Ik1). Ref models reset conductance gres and refractory pulse pres (through Ilkref)."; (EN): it can be seen in the circuit diagram(s) that the various signals propagating through the circuits control the neuron outputs). It would have been obvious to combine Mitra, Yoon, and Benjamin for the same reasons and with the same motivation set forth above regarding claim 4.

Regarding Claim 13: Mitra and Yoon do not distinctly disclose the spiking neural network of claim 1, wherein the spiking neural network displays a range of pattern activity in use, comprising full synchrony, cluster or asynchronous states, heterogeneities in the input patterns, neurosynaptic elements' spatio-temporal dynamics, non-linear spiking behaviour, and/or frequency adaptability.
However, Benjamin teaches this limitation (Benjamin [Figure 19 caption]: "Simulating a million neurons. The neurons were organized into fifteen 256x256 cell layers, arranged in a ring, so the first and last layers are nearest neighbors. Each cell layer's neurons inhibit neighboring neurons in its layer as well as in three neighboring layers to either side (the central layer's connectivity is shown). Spike rasters (from a tenth of each layer's neurons) reveal global synchrony, as expected from the network's recurrent inhibition."). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Mitra, Yoon, and Benjamin for the same reasons and with the same motivation set forth above regarding claim 4 (Benjamin [page 699, section I, par. 1]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 11630993 B2 — an artificial neuron for a neuromorphic chip with resistive synapses
US 20210073622 A1 — methods, apparatuses, and systems for in- or near-memory processing
US 9111224 B2 — a technique for neural learning of natural multi-spike trains in spiking neural networks
US 20140032460 A1 — a spike-domain asynchronous neuron circuit
US 20120011091 A1 — techniques for power-efficient implementation of neuron synapses with positive and/or negative synaptic weights
US 20050090756 A1 — a neural spike detection system
Bartolozzi, C. & Indiveri, G. (2007), "Synaptic Dynamics in Analog VLSI," Neural Computation, 19, 2581-2603, doi: 10.1162/neco.2007.19.10.2581 — experimental data showing how the circuit exhibits realistic dynamics and how it can be connected to additional modules for implementing a wide range of synaptic properties
S. Tamura, Y. Nishitani, C. Hosokawa and Y. Mizuno-Matsumoto, "Asynchronous Multiplex Communication Channels in 2-D Neural Network With Fluctuating Characteristics," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 8, pp. 2336-2345, Aug. 2019, doi: 10.1109/TNNLS.2018.2880565 — showing that several asynchronous multiplex communication channels can be established in a 2-D mesh neural network with randomly generated weights between eight neighbors

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY M. SACKALOSKY, whose telephone number is (703) 756-1590. The examiner can normally be reached M-F 7:30am-3:30pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/COREY M SACKALOSKY/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128
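The Yoon passage relied on in the rejection ([0032]) describes a standard leaky integrate-and-fire mechanism: weighted synaptic currents accumulate on a membrane potential until a threshold triggers an output spike. A minimal software sketch of that mechanism follows, for illustration only; the application and the cited art describe hardware circuits, and none of these parameter values come from the record.

```python
# Minimal leaky integrate-and-fire model of the mechanism Yoon [0032]
# describes: weighted synaptic currents charge a membrane potential,
# and a spike fires when the potential crosses threshold.
# All parameter values are illustrative, not from any cited reference.

def lif_run(inputs, weights, threshold=1.0, leak=0.9):
    """Return the output spike train for a sequence of input spike vectors.

    inputs  : list of per-step presynaptic spike vectors (0/1 per synapse)
    weights : one synaptic weight per presynaptic input
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v *= leak                                    # membrane leak
        v += sum(w * s for w, s in zip(weights, x))  # weighted synaptic current
        if v >= threshold:                           # threshold crossing
            spikes.append(1)
            v = 0.0                                  # reset after firing
        else:
            spikes.append(0)
    return spikes

train = lif_run([[1, 0], [1, 1], [0, 0], [1, 1]], weights=[0.4, 0.5])
# -> [0, 1, 0, 0]: only the second step's combined weighted input
#    pushes the membrane potential over threshold.
```

This is the sense in which the output is "spatio-temporal": which synapses spike (space) and when they spike (time) jointly determine the output spike train.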

Prosecution Timeline

Apr 28, 2023
Application Filed
Feb 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596932: METHOD AND SYSTEM FOR DEPLOYMENT OF PREDICTION MODELS USING SKETCHES GENERATED THROUGH DISTRIBUTED DATA DISTILLATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591759: PARALLEL AND DISTRIBUTED PROCESSING OF PROPOSITIONAL LOGICAL NEURAL NETWORKS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12572441: FULLY UNSUPERVISED PIPELINE FOR CLUSTERING ANOMALIES DETECTED IN COMPUTERIZED SYSTEMS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12518197: INCREMENTAL LEARNING WITHOUT FORGETTING FOR CLASSIFICATION AND DETECTION MODELS (granted Jan 06, 2026; 2y 5m to grant)
Patent 12487763: METHOD AND APPARATUS WITH MEMORY MANAGEMENT AND NEURAL NETWORK OPERATION (granted Dec 02, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64% (99% with interview, +49.4%)
Median Time to Grant: 4y 2m
PTA Risk: Low
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
