Prosecution Insights
Last updated: April 19, 2026
Application No. 18/267,184

HIGH DIMENSIONAL AND ULTRAHIGH DIMENSIONAL DATA ANALYSIS WITH KERNEL NEURAL NETWORKS

Non-Final OA: §101, §103
Filed
Jun 14, 2023
Examiner
COVINGTON, AMANDA R
Art Unit
3686
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
UNIVERSITY OF FLORIDA RESEARCH FOUNDATION, INC.
OA Round
1 (Non-Final)
Grant Probability: 22% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 52%

Examiner Intelligence

Grants only 22% of cases
Career Allow Rate: 22% (31 granted / 140 resolved; -29.9% vs TC avg)
Interview Lift: +29.9% (resolved cases with vs. without an interview)
Avg Prosecution: 3y 6m (34 applications currently pending)
Total Applications: 174 (across all art units)
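The headline figures above follow from simple arithmetic on the career data. A minimal sketch of that derivation, assuming the allow rate is granted/resolved and the interview lift is additive in percentage points (the tool's actual model is not disclosed):

```python
# Hypothetical reconstruction of the dashboard's headline numbers
# from the examiner's career data shown above.
granted = 31
resolved = 140

career_allow_rate = granted / resolved       # ~0.221, displayed as 22%
interview_lift = 0.299                       # +29.9 percentage points

# Assumes the lift is simply added to the base rate.
with_interview = career_allow_rate + interview_lift  # ~0.520, displayed as 52%

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```

Under this reading, the 52% "With Interview" figure is just the 22% career allow rate plus the 29.9-point interview lift, rounded for display.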

Statute-Specific Performance

§101: 40.7% (+0.7% vs TC avg)
§103: 34.9% (-5.1% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 140 resolved cases.

Office Action

Rejections under §101 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.

Step 1 of the Alice/Mayo Test: Claims 1-9 are drawn to a method, which is within the four statutory categories (i.e., process). Claims 10-19 are drawn to a system, which is within the four statutory categories (i.e., apparatus). Claim 20 is drawn to a computer-readable medium, which is within the four statutory categories (i.e., manufacture).

Step 2A of the Alice/Mayo Test - Prong One: The independent claims recite an abstract idea.
For example, claim 1 (substantially similar to independent claims 10 and 20) recites: A method for risk prediction using high-dimensional and ultrahigh-dimensional data, comprising: training a kernel-based neural network (KNN) with a training set of data to produce a trained KNN model, the KNN model comprising a plurality of kernels as a plurality of layers to capture complexity between the data with disease phenotypes, the training set of data comprising genetic information applied as inputs to the KNN and one or more phenotypes; determining a likelihood of a condition based at least in part upon an output indication of the trained KNN corresponding to the one or more phenotypes, the output indication based upon analysis of data comprising genetic information from an individual by the trained KNN; and identifying a treatment or prevention strategy for the individual based at least in part upon the likelihood of the condition.

These underlined elements recite an abstract idea that can be categorized, under its broadest reasonable interpretation, as covering the management of personal behavior or interaction (i.e., following rules or instructions), but for the recitation of generic computer components. For example, but for the kernel-based neural network, computing device, processing circuitry with memory, and non-transitory computer-readable medium with an executable program in a computing device, the limitations in the context of this claim encompass following rules to determine the likelihood of a patient's condition related to their genetic information and identifying treatment for the patient for risk prevention. If a claim limitation, under its broadest reasonable interpretation, covers management of commercial interactions but for the recitation of generic computer components, then the limitations fall within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. See MPEP § 2106.04(a).
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims (such as claims 2-9 and 11-19 reciting particular aspects of the abstract idea).

Step 2A of the Alice/Mayo Test - Prong Two: For example, claim 1 (substantially similar to independent claims 10 and 20) recites: A method for risk prediction using high-dimensional and ultrahigh-dimensional data, comprising: training a kernel-based neural network (KNN) with a training set of data to produce a trained KNN model, the KNN model comprising a plurality of kernels as a plurality of layers to capture complexity between the data with disease phenotypes (merely invokes use of a computer and other machinery as a tool as noted below, see MPEP 2106.05(f)), the training set of data comprising genetic information applied as inputs to the KNN and one or more phenotypes; determining a likelihood of a condition based at least in part upon an output indication of the trained KNN corresponding to the one or more phenotypes, the output indication based upon analysis of data comprising genetic information from an individual by the trained KNN; and (merely invokes use of a computer and other machinery as a tool as noted below, see MPEP 2106.05(f)) identifying a treatment or prevention strategy for the individual based at least in part upon the likelihood of the condition.

The judicial exception is not integrated into a practical application.
In particular, the additional elements do not integrate the abstract idea into a practical application, other than the abstract idea per se, because the additional elements amount to no more than limitations which amount to mere instructions to apply an exception (such as recitations of the kernel-based neural network, computing device, processing circuitry with memory, and non-transitory computer-readable medium with an executable program in a computing device, thereby invoking computers as a tool to perform the abstract idea; see applicant's specification [0020], [0029]-[0043], [0081], [0085], [0090]; see MPEP 2106.05(f)).

Dependent claims recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims (such as claims 4-5 and 13-14, which recite additional limitations that further the abstract idea; claims 2-3, 6-9, 11-12, and 15-19, which recite additional limitations that amount to invoking computers as a tool to perform the abstract idea; and claims 2-9 and 11-19, which recite additional limitations that generally link the abstract idea to a particular technological environment or field of use). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation and do not impose a meaningful limit to integrate the abstract idea into a practical application.

Step 2B of the Alice/Mayo Test for Claims: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception.
Additionally, the additional elements, other than the abstract idea per se, amount to no more than elements that have been recognized as well-understood, routine, and conventional activity in particular fields (such as using the kernel-based neural network, computing device, processing circuitry with memory, and non-transitory computer-readable medium with an executable program in a computing device; e.g., Applicant's specification describes the computer system in a manner indicating the additional elements are sufficiently well known that the specification need not describe their particulars to satisfy § 112(a); see Applicant's Spec. [0020], [0029]-[0043], [0081], [0085], [0090]), or that merely add a generic computer, generic computer components, or a programmed computer to perform generic computer functions (using a neural network, computing device, processing circuitry with memory, non-transitory computer-readable medium with executable program in a computing device), Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2358-59, 110 USPQ2d 1976, 1983-84 (2014).

Dependent claims recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea and generally links the abstract idea to a particular field or environment. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.

Therefore, the claims are not patent eligible and are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 10-12, 14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (WO 2015/054266) in view of Comaniciu et al. (WO 2018/192672).
Regarding claim 1, Wang discloses a method for risk prediction using high-dimensional and ultrahigh-dimensional data, comprising: training a kernel-based neural network (KNN) with a training set of data to produce a trained KNN model (the model, a neural network using a kernel method (KNN), is trained using the training data set, and the trained model is used for model improvement by automated selection of new training system response predictors; para [0011], [0048]); the KNN model comprising capturing complexity between the data with disease phenotypes (the model, the neural network using the kernel method (KNN model), describes the level of impact of a node affected by drug target nodes (capturing complexity between the data) and initial predictors of abnormalities in phenotypes (disease phenotypes); para [0009], [0038], [0039]); the training set of data comprising genetic information applied as inputs to the KNN and one or more phenotypes (the training data set includes system response vectors as predictors represented by DNA sequence variation (genetic information applied as inputs) for the neural network using the kernel method (KNN) and a plurality of phenotypic outcome pairs (one or more phenotypes); para [0009], [0011], [0012]); determining a likelihood of a condition based at least in part upon an output indication of the trained KNN corresponding to the one or more phenotypes (providing categorical prediction of side effects of a drug combination (a likelihood of a condition) using the trained neural network (based at least in part upon an output indication of the trained KNN) to output the phenotypic outcome pairs (corresponding to the phenotypes); para [0048], [0049], [0050]); the output indication based upon analysis of data comprising genetic information from an individual by the trained KNN (the output consisting of the phenotypic output pairs (output indication) uses training information Y (based upon analysis of data) related to experiment results comprising data from a disease gene database 130 (genetic information) for a selected experiment received from a screening database (information from an individual) used by the neural network (trained KNN); para [0050]); and identifying a treatment or prevention strategy for the individual based at least in part upon the likelihood of the condition (using a multi-target strategy to introduce possibilities for the treatment regimen (strategy for the individual) using the categorical prediction (based at least in part upon the likelihood of the condition); para [0024], [0048]).

Wang does not appear to disclose the following; however, Comaniciu teaches it is old and well known in the art of data processing to have: a plurality of kernels as a plurality of layers to capture complexity (kernels of an encoder and decoder network use four, five, or six layers (a plurality of layers) per dense block for processing of complex images (capture complexity); para [0047], [0054]). Therefore, it would have been obvious to one of ordinary skill in the art of healthcare data processing, before the effective filing date of the claimed invention, to modify Wang to incorporate a plurality of kernels as a plurality of layers to capture complexity, as taught by Comaniciu, in order to gain the advantage of computing a plurality of kernels to minimize the loss when reducing dimensionality of input images (Comaniciu; para [0047]).

Regarding claim 2, Wang-Comaniciu teaches the method of claim 1, and Comaniciu further teaches wherein a first layer of the plurality of layers comprises a plurality of kernels and a last layer of the plurality of layers comprises a single kernel or a plurality of kernels.
(Comaniciu teaches the encoder network 202 having a plurality of layers including layer 210 as shown in figure 3 (first layer) where kernels (plurality) of the encoder network are computed, and the decoder network 204 having a plurality of layers including layer 250 as shown in figure 3 (last layer) where kernels of the decoder network are computed; para [0047]; fig 2).

Regarding claim 3, Wang-Comaniciu teaches the method of claim 2, and Comaniciu further teaches wherein the plurality of kernels in the first layer converts a plurality of data inputs into a plurality of latent variants (Comaniciu teaches taking training image set 110 (data inputs) to generate latent variable values (variants); para [0047], [0048]) (plurality of kernels in the first layer taught above by Comaniciu).

Regarding claim 5, Wang-Comaniciu teaches the method of claim 2, and Comaniciu further teaches wherein individual latent variables of the plurality of kernels are generated by random sampling of outputs of the plurality of kernels (Comaniciu teaches one or more latent variables with latent variable values (variables of the kernels) determined by random sample consensus/RANSAC for outlier detection (outputs of the kernels); para [0036], [0064]).

Regarding claim 10, the claim recites substantially similar limitations as those already recited in the rejection of claim 1 and, as such, is rejected for similar reasons as given above. Additionally, Wang further discloses at least one computing device comprising processing circuitry including a processor and memory, the at least one computing device configured to at least (computer code executed by a processor in a computing device, such as a processor in a computer; para [0062]-[0066]).

Regarding claim 11, the claim recites substantially similar limitations as those already recited in the rejection of claim 2 and, as such, is rejected for similar reasons as given above.
Regarding claim 12, the claim recites substantially similar limitations as those already recited in the rejection of claim 3 and, as such, is rejected for similar reasons as given above.

Regarding claim 14, the claim recites substantially similar limitations as those already recited in the rejection of claim 5 and, as such, is rejected for similar reasons as given above.

Regarding claim 19, Wang-Comaniciu teaches the system of claim 10, and Wang further teaches wherein the training set of data and the trained KNN model are stored in a data store (Wang [0027]: training data from screening database 110 and a selected network model 150 from network database 140).

Regarding claim 20, the claim recites substantially similar limitations as those already recited in the rejection of claim 1 and, as such, is rejected for similar reasons as given above. Additionally, Wang further discloses a non-transitory computer-readable medium embodying a program executable in at least one computing device, where when executed the program causes the at least one computing device to at least (computer-readable instructions in memory 320 of computing device 300, executed by processor 310; para [0062]-[0066]).

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wang-Comaniciu in view of Karow et al. (WO 2019/169049).

Regarding claim 4, Wang-Comaniciu teaches the method of claim 3 but does not appear to teach the following; however, Karow teaches it is old and well known in the art of data processing wherein the plurality of data inputs comprise single-nucleotide polymorphisms (SNPs) or biomarkers (Karow teaches multimodal analysis of a plurality of genetic features such as SNPs (data inputs); abstract; para [0014]).
Therefore, it would have been obvious to one of ordinary skill in the art of healthcare data processing, before the effective filing date of the claimed invention, to modify Wang-Comaniciu, as modified above, to incorporate wherein the plurality of data inputs comprise single-nucleotide polymorphisms (SNPs) or biomarkers, as taught by Karow, in order to gain the advantage of looking at specific SNPs to calculate a polygenic risk score. See Karow [0018].

Regarding claim 13, the claim recites substantially similar limitations as those already recited in the rejection of claim 4 and, as such, is rejected for similar reasons as given above.

Claims 6-9 and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang-Comaniciu in view of Koppe et al. (July 2020, Deep learning for small and big data in psychiatry).

Regarding claim 6, Wang-Comaniciu teaches the method of claim 2 but does not appear to explicitly teach the following; however, Koppe teaches it is old and well known in the art of data processing wherein the single kernel or plurality of kernels of the last layer determines the output indication based upon a plurality of latent variables produced by a preceding layer of the plurality of layers (Koppe, pg. 178, col. 1, teaches outcomes is minimized across a training set for which the true y are known (for reviews see [12, 15, 36]), a process in which successive hidden layers of the network tend to learn more and more abstract representations of the data (e.g., edges and corners on early layers for visual images and fully segmented object representations on deeper layers)).
Therefore, it would have been obvious to one of ordinary skill in the art of healthcare data processing, before the effective filing date of the claimed invention, to modify Wang-Comaniciu, as modified above, to incorporate wherein the single kernel or plurality of kernels of the last layer determines the output indication based upon a plurality of latent variables produced by a preceding layer of the plurality of layers, as taught by Koppe, in order to increase the degree of accuracy through training with multiple layers. See Koppe, pg. 179, col. 2.

Regarding claim 7, Wang-Comaniciu-Koppe teaches the method of claim 6, and Koppe further teaches wherein the preceding layer is the first layer (Koppe, pg. 178, col. 1, teaches outcomes is minimized across a training set for which the true y are known (for reviews see [12, 15, 36]), a process in which successive hidden layers of the network tend to learn more and more abstract representations of the data (e.g., edges and corners on early layers for visual images and fully segmented object representations on deeper layers)). The motivation to combine the references is discussed in the rejection of claim 6 and incorporated herein.

Regarding claim 8, Wang-Comaniciu teaches the method of claim 1 but does not appear to explicitly teach the following; however, Koppe teaches it is old and well known in the art of data processing wherein the KNN is trained using minimum norm quadratic estimation (Koppe, pg. 184, col. 2, teaches that for fully probabilistic models, meaning models which treat both the observations and latent (hidden) variables as random variables, the variational annealing approach proposes to gradually increase the ratio between observation and latent variable noise in the loss function, that is, to decrease the relative noise in the hidden variables across training iterations [124, 125].
The idea is that initiating the latent variable mappings with very high noise (i.e., low precision) essentially makes the optimization criterion (in the limit) a quadratic and convex function of the observations, and thus easy to solve. As the ratio is slowly increased, putting a stronger emphasis on the latent variable model fit, more and more hidden configurations inconsistent with the data slowly 'freeze' out. Rather than steepening the overall 'energy landscape' as in simulated annealing (i.e., cooling the overall temperature, or variance), this approach gradually decreases the relative temperature of the hidden variable loss). The motivation to combine the references is discussed in the rejection of claim 6 and incorporated herein.

Regarding claim 9, Wang-Comaniciu teaches the method of claim 1 but does not appear to explicitly teach the following; however, Koppe teaches it is old and well known in the art of data processing wherein training of the KNN is accelerated using batch training (Koppe, pg. 185, col. 1, teaches that rather than computing the gradient across the entire data set, it computes the gradient from a small subset of (randomly drawn) samples, or mini-batches, thus injecting some noise into the training process that may help to avoid local minima). The motivation to combine the references is discussed in the rejection of claim 6 and incorporated herein.

Regarding claim 15, the claim recites substantially similar limitations as those already recited in the rejection of claim 6 and, as such, is rejected for similar reasons as given above.

Regarding claim 16, the claim recites substantially similar limitations as those already recited in the rejection of claim 7 and, as such, is rejected for similar reasons as given above.

Regarding claim 17, the claim recites substantially similar limitations as those already recited in the rejection of claim 8 and, as such, is rejected for similar reasons as given above.
Regarding claim 18, the claim recites substantially similar limitations as those already recited in the rejection of claim 9 and, as such, is rejected for similar reasons as given above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA R. COVINGTON, whose telephone number is (303) 297-4604. The examiner can normally be reached Monday - Friday, 10-5 MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason B. Dunham, can be reached at (571) 272-8109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMANDA R. COVINGTON/
Examiner, Art Unit 3686

/RACHELLE L REICHERT/
Primary Examiner, Art Unit 3686

Prosecution Timeline

Jun 14, 2023
Application Filed
Feb 12, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12417834
GENETICALLY PERSONALIZED INTRAVENOUS AND INTRAMUSCULAR NUTRITION THERAPY DESIGN SYSTEMS AND METHODS
Granted Sep 16, 2025 (2y 5m to grant)
Patent 12381005
DATABASE MANAGEMENT AND GRAPHICAL USER INTERFACES FOR MEASUREMENTS COLLECTED BY ANALYZING BLOOD
Granted Aug 05, 2025 (2y 5m to grant)
Patent 12119104
AUTOMATED CLINICAL WORKFLOW
Granted Oct 15, 2024 (2y 5m to grant)
Patent 11961617
PATIENT CONTROLLED INTEGRATED AND COMPREHENSIVE HEALTH RECORD MANAGEMENT SYSTEM
Granted Apr 16, 2024 (2y 5m to grant)
Patent 11915810
SYSTEM AND METHOD FOR TRANSMITTING PRESCRIPTION TO PHARMACY USING SELF-DIAGNOSTIC TEST AND TELEMEDICINE
Granted Feb 27, 2024 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 22%
With Interview: 52% (+29.9%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
