Detailed Action
This action is in response to the amendment filed on 11/20/2025 for application 17/855,774, in which:
Claims 1, 8, and 15 are the independent claims.
Claims 1, 3, 5, 8, 10, 12, 15, 17, and 19 have been amended.
Claims 2, 9, and 16 have been canceled.
Claims 1, 3-8, 10-15, and 17-21 are currently pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/12/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 11/20/2025 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. § 101 Rejections:
Applicant's arguments regarding the 35 U.S.C. § 101 rejections of the previous office action have been fully considered, but are unpersuasive.
Applicant notes amending the claims (Pages 8-9) to add further details and a technical improvement to overcome the 101 rejections. Applicant further supports this assertion by noting that the newly amended claims could not be performed in the human mind, specifically: sample the machine learning prediction model to obtain multiple predictions per input, compute per-sample uncertainty scores based on disagreement among the multiple predictions, compute Average Displacement Errors (ADE) for the multiple predictions, and update hyperparameters of the machine learning prediction model with the final differentiable loss value to improve a correlation between uncertainty and prediction error of the machine learning prediction model. Applicant also notes that, as Squires stated in Ex parte Guillaume Desjardins et al., "an improvement to how the machine learning model itself operates" integrates any abstract idea into a patent-eligible practical application. Thus, per Applicant, the amended claims set forth an improvement to machine learning prediction models, as updating the hyperparameters of the machine learning prediction model improves a correlation, as noted within the specification ([0098]). Such an improvement establishes a practical application of any abstract idea that is included in the claims. Accordingly, Applicant requests reconsideration of the rejections under 101 in light of the pending claims and arguments.
Examiner respectfully disagrees. For the reasons given below and in the 35 U.S.C. § 101 rejections, the claims are directed to an abstract idea (Step 2A Prong 1) and do not integrate the abstract idea into a practical application (Step 2A Prong 2). The claims recite abstract ideas a-f, where the abstract ideas are mathematical relationships between variables using formulas/equations, or evaluations/judgments that can be performed in the human mind (or by a human using pen and paper). The independent claim is no more detailed than using an apparatus to perform predictions via a machine learning model, which samples to obtain predictions, and specific computations to update the hyperparameters of a machine learning model. The features/limitations within the newly amended claims contain additional elements and abstract ideas, but the additional elements are unable to integrate the judicial exception: while sampling a machine learning model and updating hyperparameters are not abstract ideas themselves, those limitations are directed to merely applying an abstract idea using a generic computer as a tool (see MPEP 2106.05(f)(2), 2106.04(d)). These limitations are unable to provide the alleged improvement in learning efficiency, as sampling ML models and updating hyperparameters is done to accomplish the abstract idea(s). The computation limitations are merely mathematical concepts, as they are mathematical relationships between variables and/or numbers using mathematical formulas/equations. The limitations are unable to provide an improvement, as they are currently being evaluated as either abstract ideas or additional elements that fall within MPEP 2106.05. The claims are directed to the improvement of an abstract idea, and an improvement to an abstract idea is still considered an abstract idea.
Additionally, the claims do not reflect any improvement in the functioning of a computer or hardware processor; rather, the additional elements merely use a generic computer component to perform the abstract idea or restrict the abstract idea to a particular technological environment. Therefore, the claims do not integrate the judicial exception into a practical application, nor do they amount to significantly more. The claims are not patent eligible. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims.
MPEP 2106.05(a) recites:
After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology … the claim must include the components or steps of the invention that provide the improvement described in the specification
…
It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981)) in subsection II, below.
Applicant fails to show how any alleged technical improvement would be provided by anything more than the judicial exception on its own. Additionally, Applicant fails to show how the claim includes components or steps that would provide the alleged improvement described in the specification. Per MPEP 2106.05(f)(1), "the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished". Moreover, the examiner maintains that the claim does not impose any meaningful limits on the judicial exception. As noted in the rejection, the claim does not include additional elements that are sufficient to amount to an integration of the identified abstract idea into a practical application; thus, the claim is directed to an abstract idea.
Regarding the 35 U.S.C. § 102/103 Rejections:
Applicant's arguments regarding the 35 U.S.C. § 102/103 rejections of the previous office action have been fully considered, but are unpersuasive.
Applicant traverses the art-based rejections (Pages 9-10), asserting that the amended independent claims are not anticipated by Krishnan and thus that all art-based rejections are traversed. Applicant asserts that, as amended, the newly amended limitations of the independent claims are not taught or suggested by Krishnan. Thus, Applicant requests that the independent claims, the similar independent claims, and all dependent claims be reconsidered for allowance.
Examiner respectfully disagrees. Krishnan does not explicitly teach all newly amended limitations. However, the rejections have been updated as necessitated by amendment, utilizing Chai as an additional prior art reference. Applicant's arguments with respect to the independent claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-8, 10-15, and 17-21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
Subject Matter Eligibility Analysis Step 1:
Claim 1 recites an apparatus, thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
However, Claim 1 further recites the machine comprising:
compute per-sample uncertainty scores based on disagreement among the multiple predictions (a mathematical relationship between variables and/or numbers using a mathematical formula/equations)
compute Average Displacement Errors (ADE) for the multiple predictions (a mathematical relationship between variables and/or numbers using a mathematical formula/equations)
based on the uncertainty scores and the ADEs, determine accuracy-certainty classifications for the multiple predictions … (a human being can mentally apply evaluation to determine accuracy-certainty classifications for a plurality of predictions based on specific scores & calculations)
calculate counts of samples corresponding to the accuracy-certainty classifications (a human being can mentally apply evaluation to count samples for specific classification)
calculate an Error-Aligned Uncertainty Calibration loss (EaUC) that uses bounded approximations of error and uncertainty, and class-count weightings (a mathematical relationship between variables and/or numbers using a mathematical formula/equations)
calculate a final differentiable loss value based on a weighted sum of a primary predictive loss, an ADE-based robustness, and the EaUC (a mathematical relationship between variables and/or numbers using a mathematical formula/equations)
Claim 1 thus recites an abstract idea (that falls into the “mathematical concepts” or “mental processes” group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2:
This judicial exception is not integrated into a practical application because the additional elements recited consist of:
An apparatus, comprising: a machine learning prediction model; at least one memory; instructions; and processor circuitry to at least one of execute or instantiate the instructions to: (to perform a mental process and the performance of an abstract idea on a computer is no more than instructions to “apply it” on a computer, by MPEP 2106.05(f))
sample the machine learning prediction model to obtain multiple predictions per input (to perform a mental process and the performance of an abstract idea on a computer is no more than instructions to “apply it” on a computer, by MPEP 2106.05(f))
wherein the accuracy-certainty classifications include accurate and certain, accurate and uncertain, inaccurate and certain, inaccurate and uncertain (which is restricting the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h))
update hyperparameters of the machine learning prediction model with the final differentiable loss value to improve a correlation between uncertainty and prediction error of the machine learning prediction model (to perform a mental process and the performance of an abstract idea on a computer is no more than instructions to “apply it” on a computer, by MPEP 2106.05(f))
Subject Matter Eligibility Analysis Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements recited, alone or in combination, do not provide significantly more than the abstract idea itself. Additional elements a, b and d are merely applying the abstract idea on a computer (MPEP 2106.05(f)) which cannot provide significantly more. Additional element c is only restricting the abstract idea to a Particular Technological Environment (MPEP 2106.05(h)) which cannot provide significantly more. Thus, the claim is subject-matter ineligible.
Regarding Claim 3:
Subject Matter Eligibility Analysis Step 1:
Dependent Claim 3 recites the apparatus of Claim 1. Claim 1 is an apparatus, thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
However, Claim 3 further recites the machine comprising wherein the counts of samples corresponding to the accuracy-certainty classifications are determined using one or more of a regression or continuous structured prediction model (a mathematical relationship between variables and/or numbers using a mathematical formula/equations). Claim 3 thus recites an abstract idea (that falls into the “mathematical concepts” group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2:
This judicial exception is not integrated into a practical application because there are no new additional elements recited.
Subject Matter Eligibility Analysis Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited.
Regarding Claim 4:
Subject Matter Eligibility Analysis Step 1:
Dependent Claim 4 recites the apparatus of Claim 1. Claim 1 is an apparatus, thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
However, Claim 4 further recites the machine comprising wherein a standard negative log likelihood loss is calculated as a primary loss value (a mathematical relationship between variables and/or numbers using a mathematical formula/equations). Claim 4 thus recites an abstract idea (that falls into the “mathematical concepts” group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2:
This judicial exception is not integrated into a practical application because there are no new additional elements recited.
Subject Matter Eligibility Analysis Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited.
Regarding Claim 5:
Subject Matter Eligibility Analysis Step 1:
Dependent Claim 5 recites the apparatus of Claim 4. Claim 4 is an apparatus, thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
However, Claim 5 further recites the machine comprising wherein the standard negative log likelihood loss is added to the EaUC to calculate the final differentiable loss value (a mathematical relationship between variables and/or numbers using a mathematical formula/equations). Claim 5 thus recites an abstract idea (that falls into the “mathematical concepts” group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2:
This judicial exception is not integrated into a practical application because there are no new additional elements recited.
Subject Matter Eligibility Analysis Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited.
Regarding Claim 6:
Subject Matter Eligibility Analysis Step 1:
Dependent Claim 6 recites the apparatus of Claim 1. Claim 1 is an apparatus, thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
However, Claim 6 further recites the machine comprising wherein a robustness score is calculated and used to … (a mathematical relationship between variables and/or numbers using a mathematical formula/equations). Claim 6 thus recites an abstract idea (that falls into the “mathematical concepts” group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2:
This judicial exception is not integrated into a practical application because there are no new additional elements recited.
Subject Matter Eligibility Analysis Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited.
Regarding Claim 7:
Subject Matter Eligibility Analysis Step 1:
Dependent Claim 7 recites the apparatus of Claim 6. Claim 6 is an apparatus, thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1:
However, Claim 7 further recites the machine comprising wherein the robustness score is calculated using an Average Displacement Error (ADE) (a mathematical relationship between variables and/or numbers using a mathematical formula/equations). Claim 7 thus recites an abstract idea (that falls into the “mathematical concepts” group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2:
This judicial exception is not integrated into a practical application because there are no new additional elements recited.
Subject Matter Eligibility Analysis Step 2B:
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited.
Regarding Claims 8 and 10-14:
Claims 8 and 10-14 incorporate substantively all the limitations of Claims 1 and 3-7 in a non-transitory computer readable medium (thus, a manufacture) and further recite comprising instructions that, when executed, cause a machine to at least (these claim limitations appear to perform a mental process, and the performance of an abstract idea on a computer is no more than instructions to “apply it” on a computer, per MPEP 2106.05(f)) and do not appear to integrate the abstract idea into a practical application. The claims are subject-matter ineligible, as they do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself. Thus, Claims 8 and 10-14 are rejected for the reasons set forth in the rejections of Claims 1 and 3-7, respectively.
Regarding Claims 15 and 17-21:
Claims 15 and 17-21 incorporate substantively all the limitations of Claims 1 and 3-7 in a method (thus, a process) and further recite no new limitations; thus, Claims 15 and 17-21 are rejected for the reasons set forth in the rejections of Claims 1 and 3-7, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-8, 10-15, and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnan et al., “Improving model calibration with accuracy versus uncertainty optimization”, in view of Chai et al., “MultiPath: Multiple Probabilistic Anchor Trajectory Hypotheses for Behavior Prediction”.
Regarding Claim 1:
Krishnan teaches:
An apparatus, comprising: a machine learning prediction model; at least one memory; instructions; and processor circuitry to at least one of execute or instantiate the instructions to:
(Krishnan, Page 2, Paragraph 2, “…we introduce the accuracy versus uncertainty calibration (AvUC) loss function for probabilistic deep neural networks to derive models that will be confident on accurate predictions and indicate higher uncertainty when likely to be inaccurate”; Page 9, Paragraph 2, “We have made the code (https://github.com/IntelLabs/AVUC) available to facilitate probabilistic deep learning community to evaluate and improve model calibration for various other baselines”. The method for improving uncertainty calibration in deep neural networks (DNN) uses a machine learning prediction model to make predictions with uncertainty estimates which can be replicated utilizing the source code (interpreted as instructions by the examiner) from GitHub which implies a processor, memory, and non-transitory computer program as they are inherent within an apparatus that utilizes source code for calibrating a prediction model to obtain accurate uncertainty estimates from a DNN).
sample the machine learning prediction model to obtain … predictions per input;
(Krishnan, Page 4, Paragraph 3, “For each example with input xi … In case of probabilistic models, predictive distribution is obtained from T stochastic forward passes (Monte Carlo samples) …”. For probabilistic models, predictive distributions are obtained via Monte Carlo samples to obtain predictions per input; where a predictive distribution represents a full distribution of potential predictions rather than a single point).
compute per-sample uncertainty scores …
(Krishnan, Page 3, “… grouped into four different categories: … nAC, nAU, nIC and nIU represent the number of samples in the categories AC, AU, IC and IU respectively”; Page 4, Equation 2 [reproduced as image media_image1.png]. Equation 2 shows ui, which represents the uncertainty estimate for the model's prediction for each sample within the accuracy-uncertainty categories; thus, computing per-sample uncertainty scores).
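For illustration only (not code of record from Krishnan or the application), a minimal sketch of disagreement-based per-sample uncertainty, assuming variance across stochastic forward passes as the disagreement measure; all names are hypothetical:

```python
import numpy as np

def per_sample_uncertainty(predictions):
    """Uncertainty score per input from disagreement among T sampled
    predictions: variance across the sample axis, averaged over output
    dimensions. predictions has shape (T, N, D)."""
    return predictions.var(axis=0).mean(axis=-1)

# Three stochastic forward passes (T=3) over two inputs (N=2), 1-D output.
preds = np.array([[[1.0], [2.0]],
                  [[1.0], [4.0]],
                  [[1.0], [6.0]]])
scores = per_sample_uncertainty(preds)  # input 0 agrees, input 1 disagrees
```

A sample on which the T predictions agree receives a score of zero; greater disagreement yields a higher score.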
…
based on the uncertainty scores … , determine accuracy-certainty classifications for the … predictions, wherein the accuracy-certainty classifications include accurate and certain, accurate and uncertain, inaccurate and certain, inaccurate and uncertain;
(Krishnan, Page 3, “… grouped into four different categories: … nAC, nAU, nIC and nIU represent the number of samples in the categories AC, AU, IC and IU respectively” [accuracy-uncertainty table reproduced as image media_image2.png]. The accuracy-uncertainty table within Equation 1 denotes the classifications, which contain accurate and certain (AC), inaccurate and certain (IC), accurate and uncertain (AU), and inaccurate and uncertain (IU) samples).
calculate counts of samples corresponding to the accuracy-certainty classifications;
(Krishnan, Page 3, “… grouped into four different categories: … nAC, nAU, nIC and nIU represent the number of samples in the categories AC, AU, IC and IU respectively” [table reproduced as image media_image2.png]; Page 4, Equation 2 [reproduced as image media_image1.png]. Equation 1 shows the estimation metric for AvU (accuracy versus uncertainty), which uses the expressions from Equation 2 to calculate counts of the number of samples that fall into each of the four accuracy-certainty classifications (as seen in the accuracy-uncertainty table within Equation 1)).
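For illustration only (not code of record), a minimal sketch of counting samples in the four accuracy-certainty categories and forming the AvU ratio; the threshold values and names are hypothetical:

```python
import numpy as np

def avu_counts(errors, uncertainties, err_th, unc_th):
    """Count samples in the four accuracy-certainty categories:
    accurate/inaccurate crossed with certain/uncertain."""
    accurate = errors <= err_th
    certain = uncertainties <= unc_th
    n_ac = int(np.sum(accurate & certain))
    n_au = int(np.sum(accurate & ~certain))
    n_ic = int(np.sum(~accurate & certain))
    n_iu = int(np.sum(~accurate & ~certain))
    return n_ac, n_au, n_ic, n_iu

errors = np.array([0.1, 0.2, 0.9, 0.8])
uncert = np.array([0.1, 0.7, 0.2, 0.9])
counts = avu_counts(errors, uncert, err_th=0.5, unc_th=0.5)
avu = (counts[0] + counts[3]) / sum(counts)  # AvU = (nAC + nIU) / total
```

A well-calibrated model concentrates mass in AC and IU, driving the AvU ratio toward 1.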
calculate an Error-Aligned Uncertainty Calibration loss (EaUC) that uses bounded approximations of error and uncertainty, and class-count weightings
(Krishnan, Page 4, Equations (3) and (4), Paragraph 4, “We define the AvUC loss function … The hyperbolic tangent function is used to scale the uncertainty values between 0 and 1 … with standard gradient descent optimization and enables the model to learn to provide well-calibrated uncertainties, …” [Equations 3 and 4 reproduced as image media_image3.png]. Equations 3 and 4 show the AvUC loss function, which is based on error and uncertainty; the hyperbolic tangent function is used to bound the approximations of error for the class-count weightings (between 0 and 1 for AU/IC/AC/IU). Thus, the loss function with gradient descent (providing calibrated uncertainties) is interpreted by the examiner as an “Error-Aligned Uncertainty Calibration loss”).
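For illustration only, a simplified sketch in the general shape of an AvUC-style calibration loss, log(1 + (nAU + nIC)/(nAC + nIU)), with tanh-bounded uncertainties and soft category masses; the boolean accuracy flags and sample format are assumptions, not Krishnan's exact formulation:

```python
import math

def calibration_loss(samples, eps=1e-8):
    """AvUC-style loss sketch: tanh bounds each uncertainty into [0, 1);
    misaligned samples (accurate-uncertain, inaccurate-certain) inflate
    the numerator. samples: list of (accurate: bool, u: float >= 0)."""
    n_ac = n_au = n_ic = n_iu = 0.0
    for accurate, u in samples:
        bounded_u = math.tanh(u)          # bounded approximation in [0, 1)
        if accurate:
            n_ac += 1.0 - bounded_u       # accurate & certain mass
            n_au += bounded_u             # accurate & uncertain mass
        else:
            n_ic += 1.0 - bounded_u       # inaccurate & certain mass
            n_iu += bounded_u             # inaccurate & uncertain mass
    return math.log(1.0 + (n_au + n_ic) / (n_ac + n_iu + eps))

# Well-calibrated: accurate -> low u, inaccurate -> high u => small loss.
good = calibration_loss([(True, 0.05), (True, 0.1), (False, 3.0)])
bad = calibration_loss([(True, 3.0), (False, 0.05), (False, 0.1)])
```

Because the bounded masses are differentiable in u, a loss of this shape can be minimized with standard gradient descent.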
calculate a final differentiable loss value based on a weighted sum of a primary predictive loss, … and the EaUC; and
(Krishnan, Page 4, Paragraph 1, “… We propose differentiable approximations to the AvU utility and introduce a trainable uncertainty calibration loss (LAvUC) in section 3.1, which serves as the utility-dependent penalty term within the loss-calibrated approximate inference framework described in section 3.2”; Page 5, Equation 5 [reproduced as image media_image4.png]; Page 5, Algorithm 1: Line 19. LAvUC appears within Equation 5, which gives the final differentiable loss value and can be seen in Line 19 of Algorithm 1 as a weighted sum in which the primary predictive loss is added to the AvUC loss multiplied by the weight (beta); the unweighted AvUC loss is the EaUC).
update hyperparameters of the machine learning prediction model with the final differentiable loss value to improve a correlation between uncertainty and prediction error of the machine learning prediction model.
(Krishnan, Page 4, Paragraph 1, “… trainable uncertainty calibration loss (LAvUC) in section 3.1, which serves as the utility-dependent penalty term within the loss-calibrated approximate inference framework described in section 3.2”; Page 5, Algorithm 1: Lines 20-21. Algorithm 1: Lines 20-21 indicate the parameters of the prediction model being updated/calibrated based on the total loss (final differentiable loss value) by applying gradient-based optimization to the variational parameters; thus, improving a correlation between uncertainty and prediction error of the ML model as it is optimized based on LAvUC).
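For illustration only, a minimal sketch of the "total loss as a weighted sum, then gradient update" structure discussed above; the weights, gradients, and learning rate are hypothetical values, not taken from Krishnan or the application:

```python
def total_loss(primary_loss, ade_robustness, eauc_loss,
               alpha=1.0, beta=1.0, gamma=1.0):
    """Final differentiable loss as a weighted sum of a primary predictive
    loss, an ADE-based robustness term, and the EaUC calibration term."""
    return alpha * primary_loss + beta * ade_robustness + gamma * eauc_loss

def gradient_step(params, grads, lr=0.1):
    """One gradient-descent step using supplied gradients of the total
    loss, standing in for the parameter-update limitation."""
    return [p - lr * g for p, g in zip(params, grads)]

loss = total_loss(1.0, 0.5, 0.2, beta=0.5, gamma=3.0)
new_params = gradient_step([0.5, -0.3], [1.0, -2.0], lr=0.1)
```

The update direction follows the combined loss, so calibration error in the EaUC term directly influences the learned parameters.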
Krishnan teaches sampling to obtain predictions per input and using a machine learning prediction model to perform computations, but does not explicitly disclose multiple predictions per input or computing ADE.
However, Chai teaches:
sample the machine learning prediction model to obtain multiple predictions per input;
(Chai, Page 2, Figure 1; Page 6, Figure 2. Figure 1 denotes the machine learning prediction model obtaining multiple predictions (Fig. 1: Inference (p = 0.1; 0.3; 0.5)) per input (Fig. 1: Input scene). Figure 2 also denotes the samples drawn from the data generation (noted within the caption)).
compute per-sample uncertainty scores based on disagreement among the multiple predictions;
(Chai, Page 5, Paragraphs 4 & 10 [reproduced as images media_image5.png and media_image6.png]; Page 6, Figure 2: “(b) MultiPath with K = 3 anchors correctly learns the intent and uncertainty distributions, achieving high likelihood …”. Figure 2(b) shows the generation of intent and uncertainty distributions for 3 anchors (trajectories) estimated from samples; thus, each anchor trajectory (based on samples) yields a computed intent score, as denoted in the color-graded figure. The examiner interprets the intent and uncertainty distributions within MultiPath as computing scores based on disagreement among the multiple predictions shown in Figure 2, where disagreement is interpreted as uncertainty between multiple predictions).
compute Average Displacement Errors (ADE) for the multiple predictions;
… an ADE-based robustness …
(Chai, Page 6, Figure 2, Paragraph 2 [reproduced as image media_image7.png]. Figure 2 shows the computation of distance-based ADE (average displacement error), which compares the ground-truth trajectory to the most likely prediction within a weighted set; thus, computing ADE for the multiple predictions. The examiner interprets the minADE metric as the ADE-based robustness calculation, as shown in Table 1 for MultiPath).
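For illustration only, a minimal sketch of ADE and minADE over multiple predicted trajectories; the waypoint values are hypothetical:

```python
import math

def ade(pred, truth):
    """Average Displacement Error: mean Euclidean distance between
    predicted and ground-truth waypoints over all timesteps."""
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(truth)

def min_ade(multi_preds, truth):
    """minADE over multiple sampled trajectories: ADE of the best one."""
    return min(ade(p, truth) for p in multi_preds)

truth = [(0, 0), (1, 0), (2, 0)]
preds = [[(0, 1), (1, 1), (2, 1)],   # offset by 1 at every waypoint
         [(0, 0), (1, 0), (2, 0)]]   # exact match
```

minADE scores a multi-modal predictor by its closest hypothesis, so one accurate trajectory among many suffices for a low score.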
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Krishnan's apparatus, which utilizes source code for calibrating a prediction model to obtain accurate uncertainty estimates for classification, with the MultiPath system of Chai's documentation, which explicitly notes multiple predictions per input and the specific calculation of an Average Displacement Error for evaluations. One having ordinary skill in the art would have been motivated to implement this change before the effective filing date of the claimed invention, as it leads to increased log likelihood, improved efficiency, decreased error, improved analytics, and more (Chai, Page 7, Table 1; Abstract, “We present MultiPath, which leverages a fixed set of future state-sequence anchors that correspond to modes of the trajectory distribution. At inference, our model predicts a discrete distribution over the anchors and, for each anchor, regresses offsets from anchor waypoints along with uncertainties, yielding a Gaussian mixture at each time step. Our model is efficient, requiring only one forward inference pass to obtain multi-modal future distributions, and the output is parametric, allowing compact communication and analytical probabilistic queries. We show on several datasets that our model achieves more accurate predictions, and compared to sampling baselines, does so with an order of magnitude fewer trajectories”. Table 1 shows different methods and analyses being conducted to show the increase in log likelihood and decrease in error via utilizing ADE for MultiPath examples (multiple predictions per input)).
Regarding Claim 3:
Krishnan/Chai teach the apparatus of Claim 1 and Krishnan further teaches:
wherein the counts of samples corresponding to the accuracy-certainty classifications are determined using one or more of a regression … structured prediction model.
(Krishnan, Page 3 [accuracy-uncertainty table reproduced as image media_image2.png], “A reliable and well-calibrated model will provide higher AvU … we expect the model to be certain about its predictions when it is accurate and provide high uncertainty estimates when making inaccurate predictions”; Page 4, Notations [reproduced as image media_image8.png]. Equation 2 shows the accuracy-uncertainty classifications (seen in the accuracy-uncertainty table within Equation 1) being determined using the regression-structured prediction model's parameters (as the output is a continuous value via uncertainty error and the prediction model describes a relationship between input and output based on a predictive distribution; thus, a regression-structured prediction model)).
Regarding Claim 4:
Krishnan/Chai teach the apparatus of Claim 1 and Krishnan further teaches:
wherein a standard negative log likelihood loss is calculated as a primary loss value.
(Krishnan, Page 4, Paragraph 1, “… We propose differentiable approximations to the AvU utility and introduce a trainable uncertainty calibration loss (LAvUC) in section 3.1, which serves as the utility-dependent penalty term within the loss-calibrated approximate inference framework described in section 3.2”; Page 5, Equation 5 [reproduced as image media_image9.png]. Equation 5 contains an expected negative log likelihood, which includes the standard log-likelihood expression log p(y|x,w)).
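For illustration only, a minimal sketch of a standard negative log likelihood for a Gaussian predictive distribution, the usual primary loss for regression models with uncertainty; the numeric values are hypothetical:

```python
import math

def gaussian_nll(y, mu, sigma):
    """Standard negative log likelihood of y under a Gaussian prediction
    N(mu, sigma^2): -log p(y | mu, sigma)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) \
        + (y - mu) ** 2 / (2 * sigma ** 2)

# NLL is lower when the prediction is closer to the observed target.
near = gaussian_nll(1.0, 1.1, 0.5)
far = gaussian_nll(1.0, 3.0, 0.5)
```

Because the NLL penalizes both distance from the target and miscalibrated variance, it serves naturally as the primary loss value to which a calibration term is added.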
Regarding Claim 5:
Krishnan/Chai teach the apparatus of Claim 4 and Krishnan further teaches:
wherein the standard negative log likelihood loss is added to the EaUC to calculate the final differentiable loss value.
(Krishnan, Page 4, Paragraph 1, “… We propose differentiable approximations to the AvU utility and introduce a trainable uncertainty calibration loss (LAvUC) in section 3.1, which serves as the utility-dependent penalty term within the loss-calibrated approximate inference framework described in section 3.2”; Page 5, Equation 5 [reproduced as image media_image10.png]; Page 5, Algorithm 1: Line 19. Algorithm 1: Line 19 and Equation 5 (total loss) show the final differentiable loss value being calculated by adding the standard negative log likelihood (from the ELBO loss) to the EaUC (LAvUC); thus, calculating the final differentiable loss value (total loss)).
Regarding Claim 6:
Krishnan/Chai teach the apparatus of Claim 1 and Krishnan further teaches:
wherein a robustness score is calculated and used to calibrate the prediction model with the final differentiable loss value
(Krishnan, Page 3, Paragraph 5, “… We propose differentiable approximations to the accuracy versus uncertainty (AvU) defined in Equation 1 to be used as utility function, which can be computed for a mini-batch of data samples while training the model”; Page 6, Paragraph 1, “… When uncertainty estimates are not accurate, AvU → 0 and LAvUC → ∞, guiding the gradient computation to push the AvUC loss towards 0, which will happen when the AvU score is pushed higher (AvU → 1), enabling the model to maximize the utility to provide well-calibrated uncertainties … we show how AvUC loss and ELBO loss vary during training and the impact of AvUC regularization term on loss-calibrated ELBO (total loss) and actual AvU score”. The AvU (accuracy versus uncertainty) score is interpreted by the examiner as a robustness score (i.e., a quantifiable score indicating model performance), as it indicates how well the prediction model’s uncertainty estimates align with its accuracy. The AvU score enables the model to maximize the calibration of the prediction model; thus, it is used to calibrate the prediction model).
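For illustration, the AvU score referenced in Krishnan’s Equation 1 (the fraction of predictions that are accurate-and-certain or inaccurate-and-uncertain) can be sketched as below. This is a hypothetical Python sketch, not code from the reference; the function name and boolean-list inputs are assumptions.

```python
def avu_score(accurate, certain):
    """AvU = (nAC + nIU) / (nAC + nAU + nIC + nIU), where each count
    bins a prediction by whether it is accurate and whether the model
    is certain about it (per Krishnan, Equation 1)."""
    n_ac = sum(1 for a, c in zip(accurate, certain) if a and c)
    n_au = sum(1 for a, c in zip(accurate, certain) if a and not c)
    n_ic = sum(1 for a, c in zip(accurate, certain) if not a and c)
    n_iu = sum(1 for a, c in zip(accurate, certain) if not a and not c)
    return (n_ac + n_iu) / (n_ac + n_au + n_ic + n_iu)
```

A model that is certain exactly when it is accurate attains AvU = 1; uninformative uncertainty estimates drive the score toward 0.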
Regarding Claim 7:
Krishnan/Chai teach the apparatus of Claim 6. However, Krishnan does not explicitly teach:
wherein the robustness score is calculated using an Average Displacement Error (ADE).
However, Chai does explicitly teach:
wherein the robustness score is calculated using an Average Displacement Error (ADE).
(Chai, Page 6, Figure 2, Paragraph 2 [image: media_image7.png]. Figure 2 shows the computation of the distance-based ADE (average displacement error), which compares the ground-truth trajectory against the most likely prediction within a weighted set. The examiner is interpreting the minADEM as the ADE-based robustness calculation, as shown in Table 1 for MultiPath).
The motivation of Claim 1’s combination of Krishnan and Chai is still maintained.
Regarding Claims 8 and 10-14:
Claims 8 and 10-14 incorporate substantively all the limitations of Claims 1 and 3-7 in a non-transitory computer readable medium, and further recite comprising instructions that, when executed, cause a machine to at least (Krishnan, Page 9, Paragraph 2, “We have made the code (https://github.com/IntelLabs/AVUC) available to facilitate probabilistic deep learning community to evaluate and improve model calibration for various other baselines”. The source code (interpreted by the examiner as the instructions for a method) from GitHub implies a processor, memory, and a non-transitory computer readable medium, as they are inherent within an apparatus that utilizes source code for calibrating a prediction model to obtain accurate uncertainty estimates from a DNN); thus, Claims 8 and 10-14 are rejected for reasons set forth in the rejections of Claims 1 and 3-7, respectively.
Regarding Claims 15 and 17-21:
Claims 15 and 17-21 incorporate substantively all the limitations of Claims 1 and 3-7 in a method (Krishnan, Page 9, Paragraph 2, “We have made the code (https://github.com/IntelLabs/AVUC) available to facilitate probabilistic deep learning community to evaluate and improve model calibration for various other baselines”. The source code (interpreted by the examiner as the instructions) from GitHub implies a processor, memory, and a non-transitory computer program, as they are inherent within an apparatus that utilizes source code to implement a method of calibrating a prediction model to obtain accurate uncertainty estimates from a DNN), and further recite no new limitations; thus, Claims 15 and 17-21 are rejected for reasons set forth in the rejections of Claims 1 and 3-7, respectively.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM RAHMAN whose telephone number is (703)756-1646. The examiner can normally be reached M-F 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/I.R./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122