Prosecution Insights
Last updated: April 19, 2026
Application No. 17/564,870

NEURAL ODE-BASED CONDITIONAL TABULAR GENERATIVE ADVERSARIAL NETWORK APPARATUS AND METHOD

Non-Final OA — §101, §112
Filed: Dec 29, 2021
Examiner: CAMPOS, ALFREDO
Art Unit: 2129
Tech Center: 2100 (Computer Architecture & Software)
Assignee: UIF (University Industry Foundation), Yonsei University
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (above average; 5 granted / 6 resolved; +28.3% vs TC avg)
Interview Lift: +33.3% across resolved cases with interview
Typical Timeline: 3y 9m average prosecution; 26 applications currently pending
Career History: 32 total applications across all art units

Statute-Specific Performance

§101: 33.3% (-6.7% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 3.9% (-36.1% vs TC avg)
§112: 20.0% (-20.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 6 resolved cases.

Office Action

Grounds: §101, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 11/26/2025 have been fully considered but they are not persuasive. Regarding the §101 arguments on pages 8-10: "On pages 6-8, the Office Action alleges that the claims are directed to 'a mathematical process ... and a mental process ... [do not] recite additional elements that integrate the judicial exception into a particular application ... [and do] not include additional elements that are sufficient to amount to significantly more than the judicial exception ... For example, the claims recite the combination of additional elements of 'perform[ing] feature extraction of the received sample and generate a plurality of continuous trajectories h(t), t1, t2, ..., tm through Ordinary Differential Equations (ODE) on the feature-extracted sample, ti being trained for i = 1, 2, ..., m for all i using a gradient ... Thus, the claim as a whole integrates the alleged judicial exception into a practical application." The applicant argues that the amended limitations overcome the §101 rejection. However, the argument is not persuasive; the amended limitations had not previously been examined. See the updated §101 rejection.

Claim Rejections - 35 USC § 112(a)

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-4, 9-10, and 14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s)), at the time the application was filed, had possession of the claimed invention.

Regarding claim 1 and analogous claim 9, the limitation "ti being trained for i = 1, 2, ..., m for all i using a gradient definition from an adjoint sensitivity method, m being a hyperparameter in a corresponding model" lacks written description. Regarding these limitations, the specification recites at page 19, lines 4-16: [specification excerpt reproduced as an image]. The specification does not provide enough information to determine what is referred to when m may be a hyperparameter of the corresponding model. The specification also states that the gradient definition may be used to train ti for all i. The limitations above are therefore interpreted as training at ti for all i. All dependent claims inherit the issue.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-4, 9-10, and 14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 1 and 9, the limitation "ti being trained for i = 1, 2, ..., m for all i using a gradient definition from an adjoint sensitivity method, m being a hyperparameter in a corresponding model" renders the claims indefinite because the scope of the claims is unascertainable. The limitation states that ti is trained for all i up to m; the limitation iterates to m. The specification teaches at page 19, lines 11-13, that the "gradient definition (derived from the adjoint sensitivity method) may be used to train ti for all i". At page 19, lines 8-9, the specification recites "Accordingly, to discretize the trajectory of h(t), t1, t2, ..., tm may be trained and m may be a hyperparameter in the corresponding model"; based on the claim, it is thus unclear what the corresponding model would be. According to the specification, the model is trained at ti for all i. The claims are being examined based on the NODE-based generator training at ti for all i. All dependent claims inherit the issue.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 9, 10, and 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The subject matter eligibility test for products and processes is described below for claim 1 in view of the dependent claims.

Regarding claim 1:

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Claim 1 recites an apparatus, which is a system that falls under the statutory categories.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes — the claim recites the following. "by a Neural Ordinary Differential Equation (NODE)-based ... obtaining a condition vector from a condition distribution based on the preprocessed tabular data and a noisy vector from a Gaussian distribution based on the preprocessed tabular data, merging [[a]] the condition vector and [[a]] the noisy vector": the limitation recites a mathematical process of obtaining a noisy vector from a Gaussian distribution (see MPEP 2106.04(a)(2)). "and performing ...": the limitation recites a mathematical process of homeomorphic mapping (see MPEP 2106.04(a)(2)) and a mental process of generating fake samples (see MPEP 2106.04(a)(2)(III)). "perform feature extraction of the received sample and generate a plurality of continuous trajectories h(t), t1, t2, ..., tm through Ordinary Differential Equations (ODE) on the feature-extracted sample": the limitation is directed toward a mathematical process of using an ODE to generate trajectories (see MPEP 2106.04(a)(2)).

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a particular application? No — the claim includes the following additional elements. "A Neural ODE-based Conditional Tabular Generative Adversarial Network (OCT-GAN) apparatus, comprising:": the additional element falls under "apply it" as it uses a machine learning architecture (see MPEP 2106.05(f)). "ti being trained for i = 1, 2, ..., m for all i using a gradient definition derived from an adjoint sensitivity method, m being a hyperparameter in a corresponding model, ti being a time point at which two trajectories at ti are dissimilar to each other and two trajectories from t0 to tm are similar to each other, and ti being trained by swapping a hidden vector of one of the two trajectories at ti with a hidden vector of another of the two trajectories at tm;": the additional elements fall under "apply it" as using a generic computer to train; see Mere Instructions to Apply an Exception (MPEP 2106.05(f)). "receive a sample composed of [[a]] the real sample or the fake sample of the preprocessed tabular data;": the additional elements fall under insignificant extra-solution activity as mere data gathering by receiving a sample (see MPEP 2106.05(g)). "preprocess tabular data composed of a discrete column and a continuous column;": the additional elements fall under "apply it" as using a generic computer to preprocess tabular data (MPEP 2106.05(f)). "and generate a merged trajectory hx by merging the plurality of continuous trajectories, and classify the sample as real or fake through the merged trajectory.": the additional elements fall under "apply it" as using a generic computer to generate a merged trajectory (MPEP 2106.05(f)).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No — the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Considered as an ordered whole, the claim is directed to a mathematical process of adversarial learning. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements fall under data gathering and "apply it" and do not limit the claim. The claimed method does not improve the functioning of a computer, does not transform an article into another article, and is not applied by a particular machine, making the claim not patent eligible. (Examiner's note: the claim could potentially include significantly more if the process for training at ti were further explained, as mentioned at page 19, lines 4-21.)

Regarding claim 2: Step 2A Prong 2, Step 2B: the additional element "wherein the circuitry is further configured to transform discrete values in the discrete column into a one-hot vector and preprocess continuous values in the continuous column with mode-specific normalization" falls under "apply it" by transforming discrete values into a one-hot vector and processing the continuous column with mode-specific normalization (see MPEP 2106.05(f)). The judicial exceptions are not integrated into a practical application and do not provide an improvement; the claim does not provide an inventive concept or a practical application.

Regarding claim 3: Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes — "wherein the circuitry is further configured to generate a normalized value and a mode value by applying a Gaussian mixture to each of the continuous values and normalizing the same with a corresponding standard deviation" recites a mathematical concept of a Gaussian mixture and normalization with a standard deviation (see MPEP 2106.04(a)(2)). Step 2A Prong 2, Step 2B: no additional elements are recited in the claim. The judicial exceptions are not integrated into a practical application and do not provide an improvement; the claim does not provide an inventive concept or a practical application.

Regarding claim 4: Step 2A Prong 2, Step 2B: the claim includes the additional element "wherein the circuitry is further configured to transform raw data in the preprocessed tabular data into mode-based information by merging the one-hot vector, the normalized value, and the mode value." The additional element falls under "apply it" as the tabular data preprocessing unit is used to transform the data (see MPEP 2106.05(f)). No other additional elements are recited. The judicial exceptions are not integrated into a practical application and do not provide an improvement; the claim does not provide an inventive concept or a practical application.

Regarding claim 14: Step 2A Prong 2, Step 2B: the claim includes the additional element "wherein h(t1), h(t2), ..., h(tm) share a same parameter θf constituting a single system of Ordinary Differential Equations separated for a purpose of discretization, and wherein the circuitry is configured to use an entire trajectory including h(t1), h(t2), ..., h(tm) for classification." The additional elements fall under "apply it" as using a generic computer to use the entire trajectory for classification; see Mere Instructions to Apply an Exception (MPEP 2106.05(f)). No other additional elements are recited. The judicial exceptions are not integrated into a practical application and do not provide an improvement; the claim does not provide an inventive concept or a practical application.

Claims 9 and 10 recite a method and are analogous to system claims 1-4 and 14. Therefore, the rejections of claims 1-4 and 14 above apply equally to claims 9 and 10.

Allowable Subject Matter

Claim 1 and analogous claim 9 would be allowable if rewritten or amended to overcome the rejections under 35 U.S.C. 101 and 112(a) and 112(b) (or 35 U.S.C. 112 (pre-AIA), first and second paragraphs) set forth in this Office action.
Claim 1 and analogous claim 9 are allowable over the prior art of record, which teaches the limitations as noted in the previous Office Action dated 9/3/2025. In particular, the closest prior art cited fails to teach the limitation reproduced below: [claim limitation reproduced as an image].

Habiba et al. ("ECG Synthesis with Neural ODE and GAN Models", International Conference on Electrical, Computer and Energy Technologies (ICECET), Cape Town, South Africa, 2021, pp. 1-6) ("Habiba") is the closest prior art. It teaches, at page 2: "In this paper, we focus on using NODE to reduce the limitation for GAN and other deep learning models for ECG data generation. NODE [3; 12] model provides better results for a continuous-time series generation as it considers the hidden dynamics of training data as a continuous function of time instead of several layers. In addition, an ODE solver can parametrise the hidden state, as shown in Eq. (1):

dh(t)/dt = ODESolver(f, h(t), t, δt)   (1)

Here f is a neural network, h(0) is the initial condition of the system, and the ODESolver computes the derivative of the output h(t) at time t. This approach of NODE provides faster training time than a residual network, with constant memory cost instead of linearly increasing memory cost, and a simpler model design. In addition, if the ODESolver in NODE can leverage the architecture of an RNN, it can learn continuous time series in real time with higher precision [11; 12]. Eq. (2) shows that NODE based on an RNN cell [12] can be used as the ODESolver to compute the output y_t at time t from the initial input y_0 at time t_0."

Habiba further teaches, at pages 3-4, section C ("GAN Model with NODE based Generator and Discriminator"), paragraphs 1-2: "For this ODE-GAN-2 model, we designed both Generator and Discriminator using NODE models. The ODEECGGenerator model described in section III-A is the Generator for this GAN model. The Discriminator for this model is a NeuralCDE network [14] which uses neural controlled differential equations (CDE) [14] as the ODESolver function. The Discriminator leverages the concept of the NeuralCDE network to learn the difference between a real ECG signal and the generated ECG signal. The NeuralCDE network converts the data to a conditional continuous path X using interpolation, and this path X is passed through the ODESolver to solve the ODE derived from path X to learn the hidden dynamics of the ODE system. Fig. 7 shows that the input time series for this Discriminator has two channels, e.g. for the real ECG signal as well as the generated ECG signal. Fig. 7 shows that the discriminator takes real ECG signal xt and generated ECG signal yt as input. Both signals xt and yt are CDEs. Interpolation between xt and yt generates an intermediate continuous path U. The NeuralCDE-based Discriminator learns the hidden dynamics (Z) of U to distinguish the real signal and the generated signal correctly. Table I describes the parameters used in the proposed NeuralCDE-based Discriminator." [table from Habiba, page 4, reproduced as an image]

Also, Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud, "Neural Ordinary Differential Equations" (2018) ("Chen") teaches, at page 2, section 2 ("Reverse-mode automatic differentiation of ODE solutions"), paragraphs 1-2: "We treat the ODE solver as a black box, and compute gradients using the adjoint sensitivity method (Pontryagin et al., 1962). This approach computes gradients by solving a second, augmented ODE backwards in time, and is applicable to all ODE solvers. This approach scales linearly with problem size, has low memory cost, and explicitly controls numerical error. Consider optimizing a scalar-valued loss function L(), whose input is the result of an ODE solver." [equations from Chen reproduced as images]

However, the prior art cited fails to teach "and ti being trained by swapping a hidden vector of one of the two trajectories at ti with a hidden vector of another of the two trajectories at tm" as claimed; this limitation is not found in the prior art and/or would not be obvious to one of ordinary skill in the art absent impermissible hindsight. For the same reason, the remaining dependent claims would be allowable.

Claims 2-4, 9-10, and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. (Examiner notes that the lack of further details and explanation in the specification and the drawings prevents the Examiner from having a proper understanding of the disclosed invention, as explained above. Accordingly, the Examiner is unable to form a clear claim interpretation and a proper understanding of the disclosed invention, which prevents the Examiner from applying a proper prior art rejection.)

Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yulia Rubanova, Ricky T. Q. Chen, and David Duvenaud, "Latent ODEs for Irregularly-Sampled Time Series" (2019), teaches ODE recurrent neural networks (ODE-RNN) trained using ODEs. Chongli Qin, Yan Wu, Jost Tobias Springenberg, Andrew Brock, Jeff Donahue, Timothy P. Lillicrap, and Pushmeet Kohli, "Training Generative Adversarial Networks by Solving Ordinary Differential Equations" (2020), teaches applying standard ODE solvers to GAN training.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALFREDO CAMPOS, whose telephone number is (571) 272-4504. The examiner can normally be reached 7:00 am - 4:00 pm, M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/ALFREDO CAMPOS/ Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/ Supervisory Patent Examiner, Art Unit 2129
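The NODE mechanism that runs through the claims and the cited art (Habiba's Eq. (1), Chen's adjoint-trained solvers) amounts to integrating dh(t)/dt = f(h(t), t) and reading the hidden state off at discrete time points t1, ..., tm. The sketch below is a minimal illustration only: it uses a hypothetical toy dynamics function in place of the trained neural network, and a fixed-step Euler integrator in place of the adaptive solvers the references describe.

```python
def f(h, t):
    # Hypothetical toy dynamics standing in for the neural network f(h(t), t; theta_f);
    # simple exponential decay so the result is easy to check by hand.
    return [-hi for hi in h]

def ode_trajectory(h0, time_points, steps_per_interval=100):
    """Fixed-step Euler integration of dh/dt = f(h, t),
    recording the hidden state h at each requested time point t_i."""
    trajectory = []
    h, t = list(h0), time_points[0]
    trajectory.append(list(h))
    for t_next in time_points[1:]:
        dt = (t_next - t) / steps_per_interval
        for _ in range(steps_per_interval):
            dh = f(h, t)
            h = [hi + dt * dhi for hi, dhi in zip(h, dh)]
            t += dt
        trajectory.append(list(h))
    return trajectory

# m = 4 discretization points t1..tm; m plays the role of the hyperparameter
# the claims mention, fixing how finely the continuous trajectory is sampled.
ts = [0.0, 0.5, 1.0, 1.5]
traj = ode_trajectory([1.0, 2.0], ts)
```

For this toy f, h(t) ≈ h(0)·exp(−t), so the recorded trajectory decays toward zero; a discriminator in the claimed arrangement would consume all of h(t1), ..., h(tm) rather than only the final state.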
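Claims 2-4 recite the tabular preprocessing that the §101 analysis characterizes as "apply it": one-hot encoding for discrete columns and mode-specific normalization for continuous columns. A minimal sketch under stated assumptions: the two Gaussian modes below are hard-coded, whereas a real pipeline would fit them with a Gaussian mixture model, and the 4-sigma scaling follows the CTGAN convention rather than anything taken from the application itself.

```python
import math

def one_hot(value, categories):
    # Discrete column: transform a categorical value into a one-hot vector.
    return [1.0 if value == c else 0.0 for c in categories]

def mode_specific_normalize(x, modes):
    """Continuous column: pick the Gaussian mode most responsible for x,
    then normalize x against that mode's mean and standard deviation.
    modes is a list of (mean, std) pairs; returns (normalized value,
    one-hot mode indicator)."""
    def density(v, mu, sigma):
        return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    k = max(range(len(modes)), key=lambda i: density(x, *modes[i]))
    mu, sigma = modes[k]
    alpha = (x - mu) / (4 * sigma)   # CTGAN-style 4-sigma scaling (assumption)
    beta = one_hot(k, range(len(modes)))
    return alpha, beta

modes = [(0.0, 1.0), (10.0, 2.0)]    # hypothetical fitted modes
alpha, beta = mode_specific_normalize(9.0, modes)
# A preprocessed row merges the one-hot vector, the normalized value,
# and the mode value, as claim 4 recites.
row = one_hot("red", ["red", "green"]) + [alpha] + beta
```

Here 9.0 falls under the second mode (mean 10, std 2), so the normalized value is (9 − 10)/(4·2) = −0.125 and the mode indicator is [0, 1].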

Prosecution Timeline

Dec 29, 2021 - Application Filed
Apr 29, 2025 - Non-Final Rejection (§101, §112)
Jul 08, 2025 - Interview Requested
Jul 16, 2025 - Examiner Interview Summary
Jul 16, 2025 - Applicant Interview (Telephonic)
Jul 29, 2025 - Response Filed
Aug 29, 2025 - Final Rejection (§101, §112)
Nov 05, 2025 - Response after Non-Final Action
Nov 26, 2025 - Request for Continued Examination
Dec 07, 2025 - Response after Non-Final Action
Dec 19, 2025 - Non-Final Rejection (§101, §112)
Mar 23, 2026 - Interview Requested
Mar 31, 2026 - Examiner Interview Summary
Mar 31, 2026 - Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561407 - ONE-PASS APPROACH TO AUTOMATED TIMESERIES FORECASTING - Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561559 - Neural Network Training Method and Apparatus, Electronic Device, Medium and Program Product - Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554973 - HIERARCHICAL DATA LABELING FOR MACHINE LEARNING USING SEMI-SUPERVISED MULTI-LEVEL LABELING FRAMEWORK - Granted Feb 17, 2026 (2y 5m to grant)
Patent 12536260 - SYSTEM, APPARATUS, AND METHOD FOR AUTOMATICALLY GENERATING NEGATIVE KEYSTROKE EXAMPLES AND TRAINING USER IDENTIFICATION MODELS BASED ON KEYSTROKE DYNAMICS - Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+33.3%)
Median Time to Grant: 3y 9m
PTA Risk: High
Based on 6 resolved cases by this examiner. Grant probability is derived from the career allow rate.
