Prosecution Insights
Last updated: April 19, 2026
Application No. 17/850,691

PARAFINITARY NEURAL LEARNING

Final Rejection — §101, §103
Filed: Jun 27, 2022
Examiner: BOSTWICK, SIDNEY VINCENT
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: UNIVERSITY OF GEORGIA RESEARCH FOUNDATION, INC.
OA Round: 2 (Final)
Grant Probability: 52% (Moderate)
OA Rounds: 3-4
To Grant: 4y 7m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 52% (71 granted / 136 resolved; -2.8% vs TC avg)
Interview Lift: +38.2% (strong; resolved cases with interview)
Avg Prosecution: 4y 7m (typical timeline; 68 currently pending)
Total Applications: 204 (career history; across all art units)

Statute-Specific Performance

§101: 24.4% (-15.6% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 136 resolved cases

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This Office Action is responsive to Applicant's Amendment filed on December 4, 2025, in which claims 1-3, 5-7, and 9-12 are currently amended. Claims 1-12 are currently pending.

Specification

Applicant's amendments made to the specification are acknowledged. Examiner's objection to the specification is hereby withdrawn, as necessitated by Applicant's amendments made to the specification.

Response to Arguments

The rejections of claims 1-12 under 35 U.S.C. § 112(b) are hereby withdrawn, as necessitated by Applicant's amendments and remarks. Applicant's arguments with respect to the rejection of claims 1-12 under 35 U.S.C. § 101 based on amendment have been considered but are not persuasive.

With respect to Applicant's arguments on pp. 9-10 of the Remarks submitted 12/4/2025 in view of Ex Parte Desjardins, Examiner notes that Applicant's arguments are moot, as the instant application does not recite training a neural network, nor is it of remotely similar scope.

With respect to Applicant's arguments on pp. 10-11 of the Remarks submitted 12/4/2025 that the instant specification recites a technical improvement, Examiner notes that even if, for the sake of argument, the instant specification does disclose a technical improvement (with which Examiner does not necessarily agree), the instant claims' scope is not limited in a way that reflects the scope of the recited portions of the instant specification. Examiner also notes that the claims themselves, aside from the recitation of generic computer components to apply the judicial exception, are directed entirely to a judicial exception. Examiner notes MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements."
Examiner further notes, from the same section: "An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome." The prior art cited by the Examiner shows objectively that an artificial neural network is itself a mathematical model, such that merely improving a generic artificial neural network model as claimed amounts to improving a known mathematical model.

With respect to Applicant's arguments directed towards a self-referential database on p. 11 of the Remarks submitted 12/4/2025, it is not clear what argument Applicant is trying to make, nor is it clear what the relationship is between a self-referential database and a mathematical model of an artificial neural network. For at least these reasons and those further detailed below, Examiner asserts that it is appropriate to maintain the rejection under 35 U.S.C. § 101.

Applicant's arguments with respect to the rejection of claims 1-12 under 35 U.S.C. § 103 based on amendment have been considered and are persuasive. The argument is moot in view of a new ground of rejection set forth below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter.

Regarding Claim 1: Claim 1 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to a non-transitory computer-readable recording medium having stored therein a program that causes a computer to execute a process, which is directed to a product, one of the statutory categories.

Step 2A Prong One Analysis: Claim 1, under its broadest reasonable interpretation, is a series of mental processes. For example, but for the generic computer components language, the limitations in the context of this claim encompass neural network processing, including the following:

- determine that the input is outside the input domain for the node of the neural network (observation, evaluation, and judgment);
- create a second node and add the second node to the neural network in a layer that the first node is in, in response to the input being received, the second node having the same edges and edge weights as the first node (observation, evaluation, and judgment; see Gartzman, "Neural Networks for Dummies", 2018: "A neuron, in the context of Neural Networks, is a fancy name that smart-alecky people use when they are too fancy to say function. A function, in the context of mathematics and computer science, is a fancy name for something that takes some input, applies some logic and outputs the result." Examiner notes that one of ordinary skill in the art could readily create a function having inputs and outputs entirely in the mind, even if abstracted as an artificial neural network, which is itself a mathematical concept);
- scale down each incoming edge of the first node (observation, evaluation, and judgment);
- scale down each incoming edge of the second node (observation, evaluation, and judgment); and
- scale up each outgoing edge of the second node (observation, evaluation, and judgment).

Therefore, claim 1 recites an abstract idea, which is a judicial exception.
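For concreteness, the limitations enumerated above amount to a short edge-rescaling routine. The following is an editor's sketch (hypothetical weight values, not taken from the application's disclosure or the record), representing a node by its incoming and outgoing edge weights:

```python
# Editor's sketch of the claimed steps (hypothetical model, not from the
# record): a node is represented by its lists of incoming and outgoing
# edge weights.
PHI = (1 + 5 ** 0.5) / 2  # the Golden Ratio, ~1.618

def split_node(incoming, outgoing):
    """Apply the rescaling steps recited in claims 1-4 to one node."""
    # Create a second node with the same edges and edge weights (claim 1).
    incoming2, outgoing2 = list(incoming), list(outgoing)
    # Scale down each incoming edge of the first node by phi^-2 (claims 2/6/10).
    incoming1 = [w * PHI ** -2 for w in incoming]
    # Scale down each incoming edge of the second node by phi^-1 (claims 3/7/11).
    incoming2 = [w * PHI ** -1 for w in incoming2]
    # Scale up each outgoing edge of the second node by phi (claims 4/8/12).
    outgoing2 = [w * PHI for w in outgoing2]
    return (incoming1, list(outgoing)), (incoming2, outgoing2)
```

The three factors are related by the identity phi^-1 + phi^-2 = 1 (a consequence of phi^2 = phi + 1), which the claims themselves do not recite but which connects the recited scale factors.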
Step 2A Prong Two Analysis: Claim 1 recites the additional elements "a computing device comprising a processor and a memory" and "machine-readable instructions stored in the memory that, when executed by the processor, cause the computing device to at least". However, these additional features are computer components recited at a high level of generality, such that they amount to no more than mere instructions to apply the judicial exception using a generic computer component. An additional element that merely recites the words "apply it" (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea, does not integrate the judicial exception into a practical application. Claim 1 also recites the additional element "receive an input for a first node of a neural network", which amounts to gathering and outputting data, which is insignificant extra-solution activity (see MPEP 2106.05(g)). Therefore, claim 1 is directed to a judicial exception.

Step 2B Analysis: Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the lack of integration of the abstract idea into a practical application, the additional elements recited in claim 1 amount to no more than mere instructions to apply the judicial exception using a generic computer component and insignificant extra-solution activity. The gathering and outputting of data is considered well-understood, routine, and conventional in the art (see MPEP 2106.05(d)(II)(i); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)). For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under § 101.
This rejection applies equally to independent claims 5 and 9, which recite a method and a computer program product, respectively, as well as to dependent claims 2-4, 6-8, and 10-12. The additional limitations of the dependent claims are addressed briefly below:

- Dependent claims 2, 6, and 10 recite additional observation, evaluation, and judgment: "wherein each incoming edge of the first node is scaled by a multiplicative factor of ϕ^-2, wherein ϕ represents the Golden Ratio."
- Dependent claims 3, 7, and 11 recite additional observation, evaluation, and judgment: "wherein each incoming edge of the second node is scaled by a multiplicative factor of ϕ^-1, wherein ϕ represents the Golden Ratio."
- Dependent claims 4, 8, and 12 recite additional observation, evaluation, and judgment: "wherein each outgoing edge of the second node is scaled up by a factor of ϕ, wherein ϕ represents the Golden Ratio."

Therefore, when considering the elements separately and in combination, they do not add significantly more to the inventive concept. Accordingly, claims 1-12 are rejected under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5, and 9 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Zhao ("Improving Neural Network Quantization without Retraining using Outlier Channel Splitting", 2019) and Neyshabur ("Norm-Based Capacity Control in Neural Networks", 2015).

[FIG. 2a of Zhao]

Regarding claim 1, Zhao teaches "A system, comprising: a computing device comprising a processor and a memory; and machine-readable instructions stored in the memory that, when executed by the processor, cause the computing device to at least:" ([Abstract] "Quantization can improve the execution latency and energy efficiency of neural networks on both commodity GPUs and specialized accelerators");

"receive an input for a first node of a neural network;" ([p. 3 §3.2] "consider a linear layer in a DNN which takes as input the m-channel activation vector x");

"determine that the input is outside an input domain for the node of the neural network;" ([p. 3 §3.2] "The core idea of OCS is to reduce the magnitude of outlier weights and/or activations in a DNN layer by duplicating a neuron"; [p. 4 §3.4] "OSC performs splits one at a time, and always splits the channel containing the largest absolute value in the layer. By prioritizing channels containing the largest values, OCS seeks to minimize distortion caused by any subsequent clipping". Outlier interpreted as input (channel activation) outside an input domain for the node of the neural network);

"create a second node and add the second node to the neural network in a layer that the first node is in, in response to the input being received, the second node having the same edges and edge weights as the first node;" ([p. 3 §3.2] "The core idea of OCS is to reduce the magnitude of outlier weights and/or activations in a DNN layer by duplicating a neuron"; [p. 4] "after duplicating a neuron, we can divide either the neuron's output value or its outgoing weights in half to preserve functional equivalence");

"scale down each incoming edge of the first node;" ([p. 3] "We split m into 2 channels. To preserve equivalence, we can halve the weights (Equation 3) or halve the input activations (Equation 4)"); and

"scale down each incoming edge of the second node;" ([p. 3] "We split m into 2 channels. To preserve equivalence, we can halve the weights (Equation 3) or halve the input activations (Equation 4)").

While it would be obvious to one of ordinary skill in the art that scaling down the incoming edges (activations) is functionally equivalent to scaling up the outgoing edge, and from a relative perspective the outgoing edge of the second node is necessarily relatively scaled by scaling down the incoming edge, Zhao does not explicitly teach "scale up each outgoing edge of the second node." Neyshabur, in the same field of endeavor, teaches this limitation ([p. 3] "The RELU has several convenient properties which we will exploit, some of them shared with other activation functions [...] Non-Negative Homogeneity: This property is important as it allows us to scale the incoming weights to a unit by c > 0 and scale the outgoing edges by 1/c without changing the function computed by the network. For layered graphs, this means we can scale Wi by c and compensate by scaling Wi+1 by 1/c."). Zhao and Neyshabur are both directed towards scaling neural networks. Therefore, Zhao and Neyshabur are reasonably pertinent analogous art.
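The non-negative homogeneity property quoted from Neyshabur above is easy to verify numerically. The following is an editor's sketch (hypothetical two-layer ReLU network with weights chosen purely for illustration): scaling one hidden unit's incoming weights by c and its outgoing weights by 1/c leaves the computed function unchanged.

```python
# Editor's illustration of ReLU non-negative homogeneity: rescaling a
# hidden unit's incoming weights by c > 0 and its outgoing weights by 1/c
# does not change the function the network computes.
def forward(x, w1, w2):
    # hidden = relu(w1 @ x); output = w2 @ hidden
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return [sum(w * h for w, h in zip(row, hidden)) for row in w2]

x = [1.0, 0.5]
w1 = [[0.5, 1.0], [-1.0, 0.3]]   # incoming weights of two hidden units
w2 = [[2.0, -1.5]]               # outgoing weights to one output unit
c = 3.0
# Rescale hidden unit 0: its incoming row times c, its outgoing weight / c.
w1_scaled = [[w * c for w in w1[0]], w1[1]]
w2_scaled = [[w2[0][0] / c, w2[0][1]]]
```

Here `forward(x, w1, w2)` and `forward(x, w1_scaled, w2_scaled)` produce the same output, mirroring the "scale Wi by c and compensate by scaling Wi+1 by 1/c" compensation Neyshabur describes.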
It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Zhao with the teachings of Neyshabur by scaling up an outgoing edge when scaling down an incoming edge to maintain non-negative homogeneity. Neyshabur provides as additional motivation for combination that this scaling maintains the function computed by the network ([p. 3]). This motivation for combination also applies to the remaining claims which depend on this combination.

Regarding claim 5, claim 5 is directed towards the method performed by the system of claim 1. Therefore, the rejection applied to claim 1 also applies to claim 5.

Regarding claim 9, claim 9 is substantially similar to claim 1. Therefore, the rejection applied to claim 1 also applies to claim 9.

Claims 2, 3, 6, 7, 10, and 11 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Zhao and Neyshabur, and further in view of Franks (US7987191B2).

Regarding claim 2, the combination of Zhao and Neyshabur teaches "The system of claim 1." However, the combination of Zhao and Neyshabur does not explicitly teach "wherein each incoming edge of the first node is scaled by a multiplicative factor of ϕ^-2, wherein ϕ represents the Golden Ratio." Franks, in the same field of endeavor, teaches this limitation ([Col. 9 l. 45 - Col. 10 l. 25] "A distance score 533 may be calculated by any number of well known methods. Furthermore, in order to give greater value to associated terms in closer proximity to a core term, the distance score values 533 assigned to associated terms as their distance to the core term increases may advantageously be decayed. This may advantageously be applied using the Fibonacci sequence in reverse. In other words, in one embodiment using the Fibonacci sequence in reverse, the distance score from the core term to an associated term is: Sij = φ^Δx, where: Sij = distance score between core term i and associated term j, φ = 0.618 is the Golden Ratio component 'phi'†, and Δx = |xi − xj| is the relative position between core term i and associated term j. †φ is the decimal component of the Golden Ratio φ = 1.618034 [...] the distance score 536 using this equation for the associated term 'cardinal' to the term 'red,' which are neighboring terms (Δx = 1), is 0.618 = 0.618^1. Similarly, the distance score 537 for the associated term 'bloom' to the term 'red' is 0.008 = 0.618^10, since 'bloom' is ten terms away from 'red' (Δx = 10)"). Franks explicitly discloses that φ = 0.618 = 1/1.618, i.e., the reciprocal of the Golden Ratio Φ, so φ^Δx = (Φ^-1)^Δx = Φ^(-Δx). The combination of Zhao and Neyshabur, as well as Franks, are directed towards graph relationships. Therefore, the combination of Zhao and Neyshabur, as well as Franks, are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Zhao and Neyshabur with the teachings of Franks by using the golden ratio as a scaling factor. Zhao already teaches selecting and using a scaling factor, and Franks teaches a specific, explicit alternative attenuation family for graph relationships and shows that increasing Δx yields progressively smaller contributions, such that the use of the golden ratio as the scaling function would lead to obvious and expected results.

Regarding claim 3, the combination of Zhao and Neyshabur teaches "The system of claim 1." However, the combination of Zhao and Neyshabur does not explicitly teach "wherein each incoming edge of the second node is scaled by a multiplicative factor of ϕ^-1, wherein ϕ represents the Golden Ratio."
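The Franks distance-score equation quoted above for claim 2 can be checked numerically. This editor's sketch (illustrative only) evaluates Sij = φ^Δx with φ = 0.618 and reproduces the two scores quoted in the patent (0.618 for Δx = 1 and roughly 0.008 for Δx = 10):

```python
# Editor's numeric check of the Franks distance-score equation as quoted:
# S_ij = phi ** delta_x, where phi = 0.618 is the decimal component of
# the Golden Ratio (i.e., the reciprocal of 1.618...).
phi = 0.618

def distance_score(delta_x):
    """Decayed distance score between a core term and an associated term."""
    return phi ** delta_x

neighboring = distance_score(1)   # "cardinal" next to "red" (dx = 1)
ten_apart = distance_score(10)    # "bloom" ten terms from "red" (dx = 10)
```

Because φ < 1, the score decays geometrically with Δx, which is the attenuation behavior the Examiner relies on in characterizing Franks.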
Franks, in the same field of endeavor, teaches this limitation ([Col. 9 l. 45 - Col. 10 l. 25], the same passage quoted above with respect to claim 2, which discloses the decayed distance score Sij = φ^Δx with φ = 0.618, the reciprocal of the Golden Ratio Φ, so that φ^Δx = (Φ^-1)^Δx = Φ^(-Δx)). The combination of Zhao and Neyshabur, as well as Franks, are directed towards graph relationships. Therefore, the combination of Zhao and Neyshabur, as well as Franks, are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Zhao and Neyshabur with the teachings of Franks by using the golden ratio as a scaling factor. Zhao already teaches selecting and using a scaling factor, and Franks teaches a specific, explicit alternative attenuation family for graph relationships and shows that increasing Δx yields progressively smaller contributions, such that the use of the golden ratio as the scaling function would lead to obvious and expected results.

Regarding claims 6 and 7, claims 6 and 7 are directed towards the method performed by claims 2 and 3, respectively. Therefore, the rejections applied to claims 2 and 3 also apply to claims 6 and 7.

Regarding claims 10 and 11, claims 10 and 11 are substantially similar to claims 2 and 3, respectively. Therefore, the rejections applied to claims 2 and 3 also apply to claims 10 and 11.

Claims 4, 8, and 12 are rejected under 35 U.S.C. § 103 as being unpatentable over the combination of Zhao and Neyshabur, and further in view of Li ("Hierarchical Chunking of Sequential Memory on Neuromorphic Architecture with Reduced Synaptic Plasticity", 2016).

Regarding claim 4, the combination of Zhao and Neyshabur teaches "The system of claim 1." However, the combination of Zhao and Neyshabur does not explicitly teach "wherein each outgoing edge of the second node is scaled up by a factor of ϕ, wherein ϕ represents the Golden Ratio." Li, in the same field of endeavor, teaches this limitation ([p. 4] "Suppose that there are N0 neurons in a chunk, and each neuron represents a particular item in the memory. Before we encode a sequence containing κ ≤ N0 metastable states in a chunk as described in Equation (2), the bias parameter of each neuron and the weight between two arbitrary neurons need be determined first. In this work, the bias parameter of neuron i in a specific chunking sequence is chosen [...] where Fk is the k-th term of the Fibonacci sequence (Dunlap, 1997) with F1 = 1 and F2 = g. Here g is the 'Golden ratio' (Dunlap, 1997; Livio, 2008).
The synaptic weight between neurons i and j in the same chunk is then selected as: [See Eqn. 4 and 5]").

The combination of Zhao and Neyshabur, as well as Li, are directed towards scaling neural network edges. Therefore, the combination of Zhao and Neyshabur, as well as Li, are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Zhao and Neyshabur with the teachings of Li by scaling the edge weights by the golden ratio. Li provides as additional motivation for combination ([p. 8] "It is seen that a small ϕ usually leads to failure of the encoding of the sequential memory while a larger ϕ improves such a situation. We repeated the experiments 200 times to estimate the encoding success rate for each fixed ϕ in Figure 7, where it is shown that the encoding success rate in each chunk is a monotonously increasing function of the dynamic range of synaptic weights").

Regarding claim 8, claim 8 is directed towards the method performed by the system of claim 4. Therefore, the rejection applied to claim 4 also applies to claim 8.

Regarding claim 12, claim 12 is substantially similar to claim 4. Therefore, the rejection applied to claim 4 also applies to claim 12.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK, whose telephone number is (571) 272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SIDNEY VINCENT BOSTWICK/
Examiner, Art Unit 2124

/MIRANDA M HUANG/
Supervisory Patent Examiner, Art Unit 2124

Prosecution Timeline

Jun 27, 2022
Application Filed
Jun 21, 2025
Non-Final Rejection — §101, §103
Oct 22, 2025
Examiner Interview Summary
Oct 22, 2025
Applicant Interview (Telephonic)
Dec 04, 2025
Response Filed
Dec 22, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12561604: SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING (2y 5m to grant; granted Feb 24, 2026)
Patent 12547878: Highly Efficient Convolutional Neural Networks (2y 5m to grant; granted Feb 10, 2026)
Patent 12536426: Smooth Continuous Piecewise Constructed Activation Functions (2y 5m to grant; granted Jan 27, 2026)
Patent 12518143: FEEDFORWARD GENERATIVE NEURAL NETWORKS (2y 5m to grant; granted Jan 06, 2026)
Patent 12505340: STASH BALANCING IN MODEL PARALLELISM (2y 5m to grant; granted Dec 23, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 90% (+38.2%)
Median Time to Grant: 4y 7m
PTA Risk: Moderate
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
