Prosecution Insights
Last updated: April 19, 2026

Application No.: 18/216,194
Title: Confronting Domain Shift in Trained Neural Networks
Office Action: Non-Final OA, §101 and §102
Filed: Jun 29, 2023
Examiner: PHAKOUSONH, DARAVANH
Art Unit: 2121
Tech Center: 2100 (Computer Architecture & Software)
Assignee: National Technology and Engineering Solutions of Sandia, LLC
OA Round: 1 (Non-Final)

Predictions
Grant probability: 50% (moderate)
Expected OA rounds: 1-2
Expected time to grant: 4y 0m
Grant probability with interview: 99%
Examiner Intelligence

Career allowance rate: 50% (grants 50% of resolved cases; 1 granted / 2 resolved; -5.0% vs Tech Center average)
Interview lift: strong, +100.0% (resolved cases with interview vs. without)
Typical timeline: 4y 0m average prosecution
Currently pending: 33 applications
Career history: 35 total applications across all art units

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)

Based on career data from 2 resolved cases; Tech Center averages are estimates.

Office Action (§101, §102)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-20 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-7 are directed to a method consisting of a series of steps, meaning they are directed to the statutory category of process. Claims 8-20 are directed to storage media and processors, which are machines.

Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.
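The Step 1 / Step 2A / Step 2B analysis described above is a fixed decision procedure from MPEP 2106. As an illustration only (the function name and boolean inputs are hypothetical stand-ins, not USPTO artifacts), the flow can be sketched as:

```python
def eligible_under_101(statutory_category: bool,
                       recites_judicial_exception: bool,
                       practical_application: bool,
                       significantly_more: bool) -> bool:
    """Sketch of the MPEP 2106 subject-matter eligibility decision flow."""
    # Step 1: the claim must fall within a statutory category.
    if not statutory_category:
        return False
    # Step 2A Prong One: no judicial exception recited -> eligible.
    if not recites_judicial_exception:
        return True
    # Step 2A Prong Two: exception integrated into a practical application -> eligible.
    if practical_application:
        return True
    # Step 2B: eligible only if the claim amounts to significantly more.
    return significantly_more

# The rejection above maps claims 1-20 to: statutory category (yes),
# judicial exception (yes), practical application (no), significantly more (no).
print(eligible_under_101(True, True, False, False))  # -> False
```

The examiner's analysis of each claim below follows exactly this branch order: abstract ideas are identified first (Prong One), then the remaining "additional elements" are tested under Prong Two and Step 2B.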
Regarding claim 1, the following claim elements are abstract ideas:

generating…a number of predictions based on time series data from a second domain, wherein a random subset of nodes in the neural network is dropped out for each prediction to generate a prediction distribution (This is an abstract idea of a mental process. The limitation involves producing multiple predictive estimates for the same time-series input and observing variation among those estimates. A person could review historical time-series values and generate several predicted outcomes by repeatedly performing the same calculation while selectively including or excluding certain contributing factors (e.g., omitting some variables in one estimate and including them in another). Each resulting predicted value could be recorded, such as in a table or spreadsheet, and the collection of values organized to form a distribution of possible outcomes. This process of repeated estimation, recording numerical results, and organizing them for comparison relies on observation, judgment, and basic mathematical calculations that can be practically performed in the human mind with the aid of simple computational tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, the limitation falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).);

calculating an uncertainty value for each prediction (This is an abstract idea of a mental process and mathematical concept. The limitation recites performing mathematical calculations to quantify variability among numerical prediction values, such as computing differences, averages, or measures of dispersion (e.g., variance or standard deviation). A person could review a set of prediction values, perform basic arithmetic to determine how much the values differ from one another, and record the result as an uncertainty value. Such calculations involve evaluation, comparison, and numerical computation that can be performed in the human mind, optionally with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process and mathematical concept groupings of abstract ideas. See MPEP 2106.04(a)(2)(I) and 2106.04(a)(2)(III).);

determination that the uncertainty value for a prediction exceeds a specified threshold (This is an abstract idea of a mental process. The limitation involves observing a numerical uncertainty value, comparing it to a predefined threshold, and making a judgment on whether the value exceeds that threshold. A person could review a calculated value, compare it against a set limit, and conclude whether the condition is satisfied. Such observation, comparison, and judgment can be practically performed in the human mind with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process grouping of abstract ideas.);

to incorporate a corrective factor according to expectations based on domain knowledge (This is an abstract idea of a mental process and mathematical concept. The limitation involves applying judgment based on domain knowledge to determine how a numerical prediction value should be adjusted and performing a mathematical modification of that value. For example, a person could review multiple predicted values, derive a representative numerical value such as an average or a measure of variation, and adjust the prediction accordingly. Such evaluation and numerical adjustment involve observation, judgment, and mathematical calculation that can practically be performed in the human mind, optionally with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process and mathematical concept groupings of abstract ideas.).
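In machine-learning terms, the claimed sequence of repeated stochastic predictions, a dispersion-based uncertainty value, and a threshold test is Monte Carlo dropout at inference. A minimal NumPy sketch of that sequence, using a toy one-hidden-layer regressor (the weights, input window, and threshold are illustrative stand-ins, not taken from the application):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" one-hidden-layer regressor (illustrative weights).
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

def predict_with_dropout(x, p_drop=0.5):
    """One stochastic forward pass: a random subset of hidden nodes is dropped."""
    h = np.maximum(W1 @ x, 0.0)            # hidden-layer activations (ReLU)
    keep = rng.random(h.shape) > p_drop    # random subset of nodes kept
    h = h * keep / (1.0 - p_drop)          # inverted-dropout rescaling
    return (W2 @ h).item()

x = np.array([0.2, -0.1, 0.4, 0.3])        # one time-series input window
samples = np.array([predict_with_dropout(x) for _ in range(100)])

prediction = samples[0]                    # nominal prediction
uncertainty = samples.std()                # dispersion of the prediction distribution
THRESHOLD = 0.5                            # illustrative uncertainty threshold
if uncertainty > THRESHOLD:
    prediction = samples.mean()            # one possible corrective factor
```

Whether such a loop over 100 stochastic forward passes can "practically be performed in the human mind" is, of course, precisely the point applicants typically contest in responding to a mental-process characterization.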
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

training a neural network with time series training data from a first domain over a number of training iterations, wherein a random subset of nodes in the neural network is dropped out during each training iteration (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).);

responsive to…updating the prediction (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, claim 2 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the time series data from the second domain includes a discontinuity in sequential data that does not exist in the time series data from the first domain (This limitation constitutes insignificant extra-solution activity; it merely specifies a characteristic of the input data to which the abstract idea is applied without adding a meaningful limitation.).

Regarding claim 3, the rejection of claim 1 is incorporated herein. Further, claim 3 recites the following abstract ideas: replacing the prediction with the mean of the prediction distribution (This is an abstract idea of a mathematical concept. The limitation involves calculating an average value from a set of numerical predictions and substituting that calculated value for the original prediction. Such averaging and numerical substitution constitute mathematical calculations and therefore fall within the mathematical concept grouping of abstract ideas.).

Regarding claim 4, the rejection of claim 1 is incorporated herein. Further, claim 4 recites the following abstract ideas: adding the standard deviation of the prediction distribution in the direction of distribution skew (This is an abstract idea of a mental process and mathematical concept. The limitation involves mathematical reasoning to compute a standard deviation from a set of numerical prediction values, identify the direction of the skew of the distribution, and add the computed value to a prediction accordingly. Such statistical calculation and numerical adjustment constitute mathematical operations that can be performed in the human mind with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, the limitation falls within the mental process and mathematical concept groupings of abstract ideas.).

Regarding claim 5, the rejection of claim 1 is incorporated herein. Further, claim 5 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein node dropout is applied to all layers of the neural network (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 6, the rejection of claim 1 is incorporated herein.
Further, claim 6 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein node dropout is applied only to a decoder portion of the neural network (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: wherein the neural network comprises one of: a recurrent neural network; a transformer; a Long Short Term Memory network; a convolutional neural network; a multilayer perceptron; a spiking neural network; or a deep belief network (This limitation constitutes insignificant extra-solution activity, as it merely specifies the type of neural network used to implement the abstract idea without adding a meaningful limitation.).

Regarding claim 8, the following claim elements are abstract ideas:

generate…a number of predictions based on time series data from a second domain, wherein a random subset of nodes in the neural network is dropped out for each prediction to generate a prediction distribution (This is an abstract idea of a mental process. The limitation involves producing multiple predictive estimates for the same time-series input and observing variation among those estimates. A person could review historical time-series values and generate several predicted outcomes by repeatedly performing the same calculation while selectively including or excluding certain contributing factors (e.g., omitting some variables in one estimate and including them in another). Each resulting predicted value could be recorded, such as in a table or spreadsheet, and the collection of values organized to form a distribution of possible outcomes. This process of repeated estimation, recording numerical results, and organizing them for comparison relies on observation, judgment, and basic mathematical calculations that can be practically performed in the human mind with the aid of simple computational tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, the limitation falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).);

calculate an uncertainty value for each prediction (This is an abstract idea of a mental process and mathematical concept. The limitation recites performing mathematical calculations to quantify variability among numerical prediction values, such as computing differences, averages, or measures of dispersion (e.g., variance or standard deviation). A person could review a set of prediction values, perform basic arithmetic to determine how much the values differ from one another, and record the result as an uncertainty value. Such calculations involve evaluation, comparison, and numerical computation that can be performed in the human mind, optionally with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process and mathematical concept groupings of abstract ideas. See MPEP 2106.04(a)(2)(I) and 2106.04(a)(2)(III).);

determination that the uncertainty value for a prediction exceeds a specified threshold (This is an abstract idea of a mental process. The limitation involves observing a numerical uncertainty value, comparing it to a predefined threshold, and making a judgment on whether the value exceeds that threshold. A person could review a calculated value, compare it against a set limit, and conclude whether the condition is satisfied. Such observation, comparison, and judgment can be practically performed in the human mind with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process grouping of abstract ideas.);

to incorporate a corrective factor according to expectations based on domain knowledge (This is an abstract idea of a mental process and mathematical concept. The limitation involves applying judgment based on domain knowledge to determine how a numerical prediction value should be adjusted and performing a mathematical modification of that value. For example, a person could review multiple predicted values, derive a representative numerical value such as an average or a measure of variation, and adjust the prediction accordingly. Such evaluation and numerical adjustment involve observation, judgment, and mathematical calculation that can practically be performed in the human mind, optionally with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process and mathematical concept groupings of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

a storage device that stores program instructions (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);

one or more processors (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);
train a neural network with time series training data from a first domain over a number of training iterations, wherein a random subset of nodes in the neural network is dropped out during each training iteration (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).);

responsive to…updating the prediction (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 9, the rejection of claim 8 is incorporated herein. The claim recites similar limitations corresponding to claim 2; therefore, the same subject matter analysis that was utilized for claim 2, as described above, is equally applicable to claim 9. Therefore, claim 9 is ineligible.

Regarding claim 10, the rejection of claim 8 is incorporated herein. The claim recites similar limitations corresponding to claim 3; therefore, the same subject matter analysis that was utilized for claim 3, as described above, is equally applicable to claim 10. Therefore, claim 10 is ineligible.

Regarding claim 11, the rejection of claim 8 is incorporated herein. The claim recites similar limitations corresponding to claim 4; therefore, the same subject matter analysis that was utilized for claim 4, as described above, is equally applicable to claim 11. Therefore, claim 11 is ineligible.

Regarding claim 12, the rejection of claim 8 is incorporated herein. The claim recites similar limitations corresponding to claim 5; therefore, the same subject matter analysis that was utilized for claim 5, as described above, is equally applicable to claim 12. Therefore, claim 12 is ineligible.

Regarding claim 13, the rejection of claim 8 is incorporated herein. The claim recites similar limitations corresponding to claim 6; therefore, the same subject matter analysis that was utilized for claim 6, as described above, is equally applicable to claim 13. Therefore, claim 13 is ineligible.

Regarding claim 14, the rejection of claim 8 is incorporated herein. The claim recites similar limitations corresponding to claim 7; therefore, the same subject matter analysis that was utilized for claim 7, as described above, is equally applicable to claim 14. Therefore, claim 14 is ineligible.

Regarding claim 15, the following claim elements are abstract ideas:

generating…a number of predictions based on time series data from a second domain, wherein a random subset of nodes in the neural network is dropped out for each prediction to generate a prediction distribution (This is an abstract idea of a mental process. The limitation involves producing multiple predictive estimates for the same time-series input and observing variation among those estimates. A person could review historical time-series values and generate several predicted outcomes by repeatedly performing the same calculation while selectively including or excluding certain contributing factors (e.g., omitting some variables in one estimate and including them in another). Each resulting predicted value could be recorded, such as in a table or spreadsheet, and the collection of values organized to form a distribution of possible outcomes. This process of repeated estimation, recording numerical results, and organizing them for comparison relies on observation, judgment, and basic mathematical calculations that can be practically performed in the human mind with the aid of simple computational tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, the limitation falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).);

calculating an uncertainty value for each prediction (This is an abstract idea of a mental process and mathematical concept. The limitation recites performing mathematical calculations to quantify variability among numerical prediction values, such as computing differences, averages, or measures of dispersion (e.g., variance or standard deviation). A person could review a set of prediction values, perform basic arithmetic to determine how much the values differ from one another, and record the result as an uncertainty value. Such calculations involve evaluation, comparison, and numerical computation that can be performed in the human mind, optionally with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process and mathematical concept groupings of abstract ideas. See MPEP 2106.04(a)(2)(I) and 2106.04(a)(2)(III).);

determination that the uncertainty value for a prediction exceeds a specified threshold (This is an abstract idea of a mental process. The limitation involves observing a numerical uncertainty value, comparing it to a predefined threshold, and making a judgment on whether the value exceeds that threshold. A person could review a calculated value, compare it against a set limit, and conclude whether the condition is satisfied. Such observation, comparison, and judgment can be practically performed in the human mind with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process grouping of abstract ideas.);

to incorporate a corrective factor according to expectations based on domain knowledge (This is an abstract idea of a mental process and mathematical concept. The limitation involves applying judgment based on domain knowledge to determine how a numerical prediction value should be adjusted and performing a mathematical modification of that value. For example, a person could review multiple predicted values, derive a representative numerical value such as an average or a measure of variation, and adjust the prediction accordingly. Such evaluation and numerical adjustment involve observation, judgment, and mathematical calculation that can practically be performed in the human mind, optionally with the aid of basic tools such as pen and paper, a calculator, or a spreadsheet. Accordingly, this limitation falls within the mental process and mathematical concept groupings of abstract ideas.).

The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

computer program product (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);

a computer-readable storage medium having program instructions (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);

training a neural network with time series training data from a first domain over a number of training iterations, wherein a random subset of nodes in the neural network is dropped out during each training iteration (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).);

responsive to…updating the prediction (This limitation constitutes mere instructions to apply the abstract idea and insignificant extra-solution activity. See MPEP 2106.05(f) and 2106.05(g).).

Regarding claim 16, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 2; therefore, the same subject matter analysis that was utilized for claim 2, as described above, is equally applicable to claim 16. Therefore, claim 16 is ineligible.
Regarding claim 17, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 3; therefore, the same subject matter analysis that was utilized for claim 3, as described above, is equally applicable to claim 17. Therefore, claim 17 is ineligible.

Regarding claim 18, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 4; therefore, the same subject matter analysis that was utilized for claim 4, as described above, is equally applicable to claim 18. Therefore, claim 18 is ineligible.

Regarding claim 19, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 5; therefore, the same subject matter analysis that was utilized for claim 5, as described above, is equally applicable to claim 19. Therefore, claim 19 is ineligible.

Regarding claim 20, the rejection of claim 15 is incorporated herein. The claim recites similar limitations corresponding to claim 6; therefore, the same subject matter analysis that was utilized for claim 6, as described above, is equally applicable to claim 20. Therefore, claim 20 is ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Martinez et al. (NPL: “Confronting Domain Shift in Trained Neural Networks” (Published: 2020)).

Regarding claim 1, Martinez discloses:

A computer-implemented method for neural network prediction correction, the method comprising: training a neural network with time series training data from a first domain over a number of training iterations, wherein a random subset of nodes in the neural network is dropped out during each training iteration (Martinez, [section 4.2] “We will implement both a WaveNet with a stack of dilations of size [1,2,4,8] and a receptive field of length 128 as in [27] and a Transformer with the base model architecture as presented in [2], each of which have seen success in predicting sequential data. For WaveNet, we will apply dropout to all convolutional layers. For Transformer, we will apply dropout only to the decoder portion of the network,” – sequential data corresponds to time series data under the broadest reasonable interpretation. Neural network layers comprise multiple computational nodes. Applying dropout to layers during training results in a subset of nodes being dropped during training iterations. Dropout inherently operates by randomly disabling units during training.
Accordingly, Martinez discloses training a neural network over a number of training iterations in which a random subset of nodes is dropped out.);

generating, by the trained neural network, a number of predictions based on time series data from a second domain, wherein a random subset of nodes in the neural network is dropped out for each prediction to generate a prediction distribution (Martinez, [section 3] “When a NN is trained to mimic time series data, it learns a mapping from patterns observed in previous timesteps to the next data point in the time series… we infer several predictions for f at time t with different subsets of neuron outputs dropped from the calculation, resulting in a distribution of predicted output values at each time step.” [section 4.2] “We will then apply our trained DL model on the experimental structure data, where the output with the corrective factor will be used to predict the next timestep of the displacements in the real structure.” – although the reference does not expressly use the phrase “second domain,” it teaches applying the trained neural network to experimental structure data that differs from the data used to train the model. Under BRI, data that differs from the training data corresponds to a second domain relative to the training domain. Accordingly, the reference teaches generating predictions based on time series data from a second domain.);

calculating an uncertainty value for each prediction (Martinez, [section 3] “Our method assumes that a NN with dropout layers used to quantify the uncertainty in its predictions is trained to approximate a real-valued function f(x, t). Input to the model is a sequence of values of f over a series of previous timesteps along with the value of x at time t, and output is the value of f over a sequence of subsequent timesteps.”);

responsive to determination that the uncertainty value for a prediction exceeds a specified threshold, updating the prediction to incorporate a corrective factor according to expectations based on domain knowledge (Martinez, [section 3] “When the model’s uncertainty exceeds a threshold value, instead of returning the model’s nominal prediction for f at time t, our method updates the prediction to incorporate information from the calculated uncertainty to improve accuracy… Rather than leaving the uncertainty estimation as a simple indication of the model’s confidence at time t, our method actively uses statistical properties of the distribution to serve as a corrective factor for the prediction of f at time t.” – the reference teaches updating the prediction when uncertainty exceeds a threshold and applying a corrective factor based on statistical properties of the prediction distribution. Under BRI, these statistical properties reflect learned expectations associated with the training domain and therefore correspond to domain knowledge.).

Regarding claim 2, Martinez discloses: The method of claim 1, wherein the time series data from the second domain includes a discontinuity in sequential data that does not exist in the time series data from the first domain (Martinez, [section 4.1] “We will first investigate our method’s performance on a toy problem consisting of data drawn from simulations of a mass-spring system with one mass element and a fixed stiffness with varying initial conditions, and loaded under a known force. We will also generate simulated data where the stiffness of the spring abruptly changes. The data will consist of a time series of the force on the mass as well as the displacement of the mass” – the reference distinguishes between training data generated under fixed stiffness conditions and separate simulated data in which the stiffness abruptly changes. The abrupt stiffness change introduces a discontinuity in the time series data that does not exist in the fixed-stiffness training data.).

Regarding claim 3, Martinez discloses: The method of claim 1, wherein updating the prediction comprises replacing the prediction with the mean of the prediction distribution (Martinez, [section 3] “We will explore two corrective methods in this work: 1) We replace the nominal prediction with the mean of the prediction distribution”).

Regarding claim 4, Martinez discloses: The method of claim 1, wherein updating the prediction comprises adding the standard deviation of the prediction distribution in the direction of distribution skew (Martinez, [section 3] “We will explore two corrective factors… 2) the addition of the standard deviation in the direction of the skew of the prediction distribution.”).

Regarding claim 5, Martinez discloses: The method of claim 1, wherein node dropout is applied to all layers of the neural network (Martinez, [section 4.2] “For WaveNet, we will apply dropout to all convolutional layers.”).

Regarding claim 6, Martinez discloses: The method of claim 1, wherein node dropout is applied only to a decoder portion of the neural network (Martinez, [section 4.2] “For Transformer, we will apply dropout only to the decoder portion of the network”).
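The two corrective factors quoted from Martinez and recited in claims 3 and 4 are simple statistics of the prediction distribution. A NumPy sketch under my own naming (the function, the `method` switch, and the sample values are illustrative, not from the reference):

```python
import numpy as np

def correct_prediction(nominal, samples, method="mean"):
    """Apply a distribution-based corrective factor to a nominal prediction.

    method="mean": replace the prediction with the distribution mean (claim 3 style).
    method="skew": add the standard deviation in the direction of the
                   distribution's skew (claim 4 style).
    """
    samples = np.asarray(samples, dtype=float)
    if method == "mean":
        return samples.mean()
    # The sign of the third central moment gives the direction of skew.
    skew_sign = np.sign(((samples - samples.mean()) ** 3).mean())
    return nominal + skew_sign * samples.std()

# Right-skewed toy distribution: the skew correction shifts the prediction upward.
dist = [1.0, 1.1, 0.9, 1.0, 2.5]
print(correct_prediction(1.0, dist, "mean"))
print(correct_prediction(1.0, dist, "skew"))
```

Framing the corrections this concretely cuts both ways in prosecution: it shows exactly what the examiner characterizes as a mathematical concept, and also what an applicant would need to tie to a technical improvement.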
Regarding claim 7, Martinez discloses: The method of claim 1, wherein the neural network comprises one of: a recurrent neural network; a transformer; a Long Short Term Memory network; a convolutional neural network; a multilayer perceptron; a spiking neural network; or a deep belief network (Martinez, [Introduction] “Techniques such as Transformers [2] and Long Short Term Memory (LSTM) [3] models have been applied to sequential data”). Regarding claim 8, Martinez discloses: A system for neural network prediction correction, the system comprising: a storage device that stores program instructions; one or more processors operably connected to the storage device and configured to execute the program instructions to cause the system to (Martinez, [Introduction] “Techniques to improve deep learning (DL) model performance on targets that have shifted from the training domain have been proposed” – the reference discloses computer-implemented deep learning techniques for neural network prediction correction. Execution of such techniques inherently requires program instructions stored in a storage device and one or more processors operably connected to the storage device and configured to execute the program instructions, as recited.): train a neural network with time series training data from a first domain over a number of training iterations, wherein a random subset of nodes in the neural network is dropped out during each training iteration (Martinez, [section 4.2] “We will implement both a WaveNet with a stack of dilations of size [1,2,4,8] and a receptive field of length 128 as in [27] and a Transformer with the base model architecture as presented in [2], each of which have seen success in predicting sequential data. For WaveNet, we will apply dropout to all convolutional layers. For Transformer, we will apply dropout only to the decoder portion of the network,” – sequential data corresponds to time series data under the broadest reasonable interpretation. 
Neural network layers comprise multiple computational nodes. Applying dropout to layers during training results in a subset of nodes being dropped during training iterations. Dropout inherently operates by randomly disabling units during training. Accordingly, Martinez discloses training a neural network over a number of training iterations in which a random subset of nodes is dropped out.); generate, by the trained neural network, a number of predictions based on time series data from a second domain, wherein a random subset of nodes in the neural network is dropped out for each prediction to generate a prediction distribution (Martinez, [section 3] “When a NN is trained to mimic time series data, it learns a mapping from patterns observed in previous timesteps to the next data point in the time series… we infer several predictions for f at time t with different subsets of neuron outputs dropped from the calculation, resulting in a distribution of predicted output values at each time step.” [section 4.2] “We will then apply our trained DL model on the experimental structure data, where the output with the corrective factor will be used to predict the next timestep of the displacements in the real structure.” – although the reference does not expressly use the phrase “second domain,” it teaches applying the trained neural network to experimental structure data that differs from the data used to train the model. Under BRI, data that differs from the training data corresponds to a second domain relative to the training domain. Accordingly, the reference teaches generating predictions based on time series data from a second domain.); calculate an uncertainty value for each prediction (Martinez, [section 3] “Our method assumes that a NN with dropout layers used to quantify the uncertainty in its predictions is trained to approximate a real-valued function f(x, t).
Input to the model is a sequence of values of f over a series of previous timesteps along with the value of x at time t, and output is the value of f over a sequence of subsequent timesteps.”); responsive to determination that the uncertainty value for a prediction exceeds a specified threshold, updating the prediction to incorporate a corrective factor according to expectations based on domain knowledge (Martinez, [section 3] “When the model’s uncertainty exceeds a threshold value, instead of returning the model’s nominal prediction for f at time t, our method updates the prediction to incorporate information from the calculated uncertainty to improve accuracy… Rather than leaving the uncertainty estimation as a simple indication of the model’s confidence at time t, our method actively uses statistical properties of the distribution to serve as a corrective factor for the prediction of f at time t.” – the reference teaches updating the prediction when uncertainty exceeds a threshold and applying a corrective factor based on statistical properties of the prediction distribution. Under BRI, these statistical properties reflect learned expectations associated with the training domain and therefore correspond to domain knowledge.).

Regarding claim 9, Martinez teaches all the elements of claim 8; claim 9 is therefore rejected for the same reasons as those presented for claim 8. The claim recites similar limitations corresponding to claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.

Regarding claim 10, Martinez teaches all the elements of claim 8; claim 10 is therefore rejected for the same reasons as those presented for claim 8. The claim recites similar limitations corresponding to claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.

Regarding claim 11, Martinez teaches all the elements of claim 8; claim 11 is therefore rejected for the same reasons as those presented for claim 8.
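The loop mapped above (repeated forward passes with different random dropout masks to form a prediction distribution, an uncertainty value taken from that distribution, and a threshold-triggered correction) can be sketched as follows. This is a toy illustration, not code from the reference: the linear "network", its weights, the dropout rate, and the threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a single linear layer whose nodes
# can be dropped at inference time. Weights and sizes are illustrative.
W = rng.normal(size=16)

def nominal_prediction(x):
    # Ordinary forward pass with no dropout.
    return float(W.sum() * x)

def dropout_prediction(x, p=0.5):
    # A different random subset of nodes is dropped on each call, so
    # repeated calls on the same input yield a distribution of outputs.
    mask = rng.random(W.shape) >= p
    return float((W * mask).sum() * x / (1 - p))  # inverted-dropout scaling

def mc_dropout(x, n_samples=200):
    return np.array([dropout_prediction(x) for _ in range(n_samples)])

samples = mc_dropout(1.0)
uncertainty = samples.std()        # uncertainty value for this prediction
prediction = nominal_prediction(1.0)
if uncertainty > 0.1:              # threshold value is an arbitrary assumption
    prediction = samples.mean()    # corrective method 1: distribution mean
```

In a real model the dropout masks would be applied inside the network's layers at inference time rather than to a weight vector, but the distribution-then-threshold structure is the same.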
The claim recites similar limitations corresponding to claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.

Regarding claim 12, Martinez teaches all the elements of claim 8; claim 12 is therefore rejected for the same reasons as those presented for claim 8. The claim recites similar limitations corresponding to claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.

Regarding claim 13, Martinez teaches all the elements of claim 8; claim 13 is therefore rejected for the same reasons as those presented for claim 8. The claim recites similar limitations corresponding to claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.

Regarding claim 14, Martinez teaches all the elements of claim 8; claim 14 is therefore rejected for the same reasons as those presented for claim 8. The claim recites similar limitations corresponding to claim 7 and is rejected for similar reasons as claim 7 using similar teachings and rationale.

Regarding claim 15, Martinez discloses: A computer program product for neural network prediction correction, the computer program product comprising: a computer-readable storage medium having program instructions embodied thereon to perform the steps of (Martinez, [Introduction] “Techniques to improve deep learning (DL) model performance on targets that have shifted from the training domain have been proposed” – the reference discloses computer-implemented techniques for neural network prediction correction.
Implementation of such techniques inherently requires program instructions embodied on a computer-readable medium to perform the disclosed steps, as recited.): training a neural network with time series training data from a first domain over a number of training iterations, wherein a random subset of nodes in the neural network is dropped out during each training iteration (Martinez, [section 4.2] “We will implement both a WaveNet with a stack of dilations of size [1,2,4,8] and a receptive field of length 128 as in [27] and a Transformer with the base model architecture as presented in [2], each of which have seen success in predicting sequential data. For WaveNet, we will apply dropout to all convolutional layers. For Transformer, we will apply dropout only to the decoder portion of the network,” – sequential data corresponds to time series data under the broadest reasonable interpretation. Neural network layers comprise multiple computational nodes. Applying dropout to layers during training results in a subset of nodes being dropped during training iterations. Dropout inherently operates by randomly disabling units during training.
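The inherency point above, that dropout disables a fresh random subset of nodes on each training iteration, can be illustrated with a toy training loop on a synthetic time series. Everything here is an assumption for illustration (window length, dropout rate, learning rate, and the one-layer linear model); it is not the reference's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "first domain" time series and next-step prediction windows.
series = np.sin(np.linspace(0, 4 * np.pi, 65))
X = np.stack([series[i:i + 8] for i in range(56)])  # 8-step input windows
y = series[8:64]                                    # next-step targets
w = np.zeros(8)                                     # one-layer linear model

for _ in range(500):  # training iterations
    # Inverted dropout: a fresh random subset of input nodes is zeroed
    # out (and the survivors rescaled) on every iteration.
    mask = (rng.random(X.shape) >= 0.5) / 0.5
    Xd = X * mask
    grad = Xd.T @ (Xd @ w - y) / len(y)             # squared-error gradient
    w -= 0.05 * grad
```

The random mask is resampled inside the loop, so no two iterations drop the same subset of nodes, which is the behavior the mapping attributes to dropout.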
Accordingly, Martinez discloses training a neural network over a number of training iterations in which a random subset of nodes is dropped out.); generating, by the trained neural network, a number of predictions based on time series data from a second domain, wherein a random subset of nodes in the neural network is dropped out for each prediction to generate a prediction distribution (Martinez, [section 3] “When a NN is trained to mimic time series data, it learns a mapping from patterns observed in previous timesteps to the next data point in the time series… we infer several predictions for f at time t with different subsets of neuron outputs dropped from the calculation, resulting in a distribution of predicted output values at each time step.” [section 4.2] “We will then apply our trained DL model on the experimental structure data, where the output with the corrective factor will be used to predict the next timestep of the displacements in the real structure.” – although the reference does not expressly use the phrase “second domain,” it teaches applying the trained neural network to experimental structure data that differs from the data used to train the model. Under BRI, data that differs from the training data corresponds to a second domain relative to the training domain. Accordingly, the reference teaches generating predictions based on time series data from a second domain.); calculating an uncertainty value for each prediction (Martinez, [section 3] “Our method assumes that a NN with dropout layers used to quantify the uncertainty in its predictions is trained to approximate a real-valued function f(x, t).
Input to the model is a sequence of values of f over a series of previous timesteps along with the value of x at time t, and output is the value of f over a sequence of subsequent timesteps.”); responsive to determination that the uncertainty value for a prediction exceeds a specified threshold, updating the prediction to incorporate a corrective factor according to expectations based on domain knowledge (Martinez, [section 3] “When the model’s uncertainty exceeds a threshold value, instead of returning the model’s nominal prediction for f at time t, our method updates the prediction to incorporate information from the calculated uncertainty to improve accuracy… Rather than leaving the uncertainty estimation as a simple indication of the model’s confidence at time t, our method actively uses statistical properties of the distribution to serve as a corrective factor for the prediction of f at time t.” – the reference teaches updating the prediction when uncertainty exceeds a threshold and applying a corrective factor based on statistical properties of the prediction distribution. Under BRI, these statistical properties reflect learned expectations associated with the training domain and therefore correspond to domain knowledge.).

Regarding claim 16, Martinez teaches all the elements of claim 15; claim 16 is therefore rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale.

Regarding claim 17, Martinez teaches all the elements of claim 15; claim 17 is therefore rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale.

Regarding claim 18, Martinez teaches all the elements of claim 15; claim 18 is therefore rejected for the same reasons as those presented for claim 15.
The claim recites similar limitations corresponding to claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.

Regarding claim 19, Martinez teaches all the elements of claim 15; claim 19 is therefore rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.

Regarding claim 20, Martinez teaches all the elements of claim 15; claim 20 is therefore rejected for the same reasons as those presented for claim 15. The claim recites similar limitations corresponding to claim 6 and is rejected for similar reasons as claim 6 using similar teachings and rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daravanh Phakousonh, whose telephone number is (571) 272-6324. The examiner can normally be reached Mon - Thurs 7 AM - 5 PM, and every other Friday 7 AM - 4 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B. Zhen, can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Daravanh Phakousonh/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Jun 29, 2023
Application Filed
Jan 29, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572821
ACCURACY PRIOR AND DIVERSITY PRIOR BASED FUTURE PREDICTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner, based on the most recent grant.


Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
4y 0m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
