DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims’ subject matter eligibility will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) (“2019 PEG”).
With respect to claim 1.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 1 recites an apparatus, which is a machine.
Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes—each of the limitations identified below, under its broadest reasonable interpretation, falls within the mental processes grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), see MPEP 2106.04(a)(2), subsection III, and the 2019 PEG, but for the recitation of generic computer components:
“generate output data using a neural network model provided input data; determine, based on the output data, a maximum loss value among calculated loss values that correspond to neighboring data within a reference distance from the input data; and detect, based on the maximum loss value and a threshold, whether the input data is out-of-distribution (OOD) data that is different from in-distribution data corresponding to training data used in a training of the neural network model”: (mental processes or mathematical concepts: calculating a loss and determining whether the input is out-of-distribution).
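For illustration only, the recited computation (a maximum loss among neighboring inputs within a reference distance, compared against a threshold) can be sketched as follows. All names, the sampling strategy, and parameter values are hypothetical and do not appear in the claims:

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable cross-entropy loss for a single example.
    shifted = logits - np.max(logits)
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[label]

def detect_ood(model, x, label, threshold, eps=0.1, n_samples=32, seed=0):
    # Sample neighboring inputs within an L-infinity ball of radius eps
    # around x, take the maximum loss over those neighbors, and flag the
    # input as OOD when that maximum meets or exceeds the threshold.
    rng = np.random.default_rng(seed)
    losses = [cross_entropy(model(x + rng.uniform(-eps, eps, size=x.shape)), label)
              for _ in range(n_samples)]
    max_loss = max(losses)
    return max_loss >= threshold, max_loss
```

The sketch uses random sampling to approximate the neighborhood search; the claims do not specify how neighboring data are obtained.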
Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application.
“one or more processors configured to execute instructions; and one or more memories storing the instructions”: mere instructions to “apply it,” because the limitation describes applying the abstract idea only at a high level of generality. See MPEP § 2106.05(f).
The generic computer components in these steps are recited at a high level of generality (i.e., as generic computer components performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—there are no additional limitations beyond the mental processes identified above. The limitations treated above are directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). The claim also merely recites the words “apply it” (or an equivalent) with the judicial exception, merely includes instructions to implement the abstract idea on a computer, or merely uses a computer as a tool to perform the abstract idea, as discussed in MPEP § 2106.05(f). The additional elements amount to insignificant extra-solution activity, similar to examples of activities that the courts have found to be insignificant extra-solution activity, in accordance with MPEP § 2106.05(g). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 2.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein, for the determining of the maximum loss, the one or more processors are further configured to determine the maximum loss value among the loss values using a cross entropy loss function”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 3.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein, for the detection, the one or more processors are further configured to, when the maximum loss value is greater than or equal to the threshold, determine the input data to be the OOD data.”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 4.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein, for the detection, the one or more processors are further configured to, when the maximum loss value is less than the threshold, determine the input data to be the in-distribution data”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 5.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein the input data comprises an image, the neural network model comprises an image classification model, and the in-distribution data corresponds to plural image data of each of plural classes the image classification model was trained to classify.”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 6.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “generate converted data by adding noise to the input data; and determine, based on the output data, the maximum loss value among loss values corresponding to neighboring data within a reference distance from the converted data.”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 7.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “determine a gradient value by differentiating a loss function, used to calculate the loss values, with respect to the converted data; generate, based on the gradient value and the converted data, updated converted data using gradient descent; and determine, based on the output data and the loss function, the maximum loss value among loss values corresponding to neighboring data within a reference distance from the updated converted data.”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
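For illustration only, the gradient-based search recited in claim 7 can be sketched as projected gradient ascent on the loss (stepping along the loss gradient to locate a maximum within the reference distance). The finite-difference gradient, step sizes, and the toy loss below are hypothetical:

```python
import numpy as np

def numeric_grad(loss_fn, x, h=1e-5):
    # Central finite-difference gradient of loss_fn at x.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        g.flat[i] = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)
    return g

def max_loss_via_ascent(loss_fn, x, eps=0.1, lr=0.1, steps=25):
    # Search for the maximum loss within an L-infinity ball of radius
    # eps around x: repeatedly step along the loss gradient and project
    # the iterate back into the ball (projected gradient ascent).
    z = x.copy()
    for _ in range(steps):
        z = z + lr * numeric_grad(loss_fn, z)
        z = np.clip(z, x - eps, x + eps)
    return loss_fn(z)
```

In practice the gradient would be obtained by differentiating the loss function with respect to the converted data (e.g., by backpropagation) rather than by finite differences.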
Claims 8-14.
Step 1: The claims recite a method, which falls into the statutory category of processes.
Step 2A Prong 1: Claims 8-14 recite the same mental processes as claims 1-7, respectively.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. As before, the mere recitation that the method is to be performed on a generic computer amounts to a mere instruction to apply the exception on the computer. See MPEP § 2106.05(f). With that exception, the analysis mirrors that of claims 1-7, respectively.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The analysis, with the one exception noted above, mirrors that of claims 1-7, respectively.
Claim 15.
Step 1: The claim recites a non-transitory computer-readable storage medium; therefore, it falls into the statutory category of articles of manufacture.
Step 2A Prong 1: Claim 15 recites the same mental processes as claim 1.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. Claim 15 recites a generic computer component, namely the “non-transitory computer-readable storage medium.” As before, the mere recitation that the operations are to be performed on a generic computer amounts to a mere instruction to apply the exception on the computer. See MPEP § 2106.05(f). With that exception, the analysis mirrors that of claim 1.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The analysis, with the one exception noted above, mirrors that of claim 1.
With respect to claim 16.
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes—claim 16 recites an apparatus, which is a machine.
Step 2A, prong one: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes—each of the limitations identified below, under its broadest reasonable interpretation, falls within the mental processes grouping of abstract ideas (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), see MPEP 2106.04(a)(2), subsection III, and the 2019 PEG, but for the recitation of generic computer components:
“generate respective output values obtained from the neural network model provided the input data and provided neighboring data; generate, based on the generated respective output values and a loss function, a flatness value within a reference distance from the input data, and detect, based on the flatness and a threshold, whether the input data is out-of-distribution (OOD) data”: (mental processes or mathematical concepts: calculating a loss and determining whether the input is out-of-distribution).
Step 2A, prong two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No—the judicial exception is not integrated into a practical application.
“one or more processors configured to execute instructions; and one or more memories storing the instructions”: mere instructions to “apply it,” because the limitation describes applying the abstract idea only at a high level of generality. See MPEP § 2106.05(f).
The generic computer components in these steps are recited at a high level of generality (i.e., as generic computer components performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No—there are no additional limitations beyond the mental processes identified above. The limitations treated above are directed to the well-understood, routine, and conventional activity of storing and retrieving information in memory. See MPEP § 2106.05(d)(II); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). The claim also merely recites the words “apply it” (or an equivalent) with the judicial exception, merely includes instructions to implement the abstract idea on a computer, or merely uses a computer as a tool to perform the abstract idea, as discussed in MPEP § 2106.05(f). The additional elements amount to insignificant extra-solution activity, similar to examples of activities that the courts have found to be insignificant extra-solution activity, in accordance with MPEP § 2106.05(g). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Thus, considering the additional elements individually and in combination and the claims as a whole, the additional elements do not provide significantly more than the abstract idea. This claim is not patent eligible.
Claim 17.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein the neighboring data is generated by adding noise to the input data”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 18.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein the one or more processors are further configured to determine the input data is forged or altered biometric data based on the input data being detected to be the OOD data.”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 19.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein the one or more processors are further configured to determine, based on the flatness value, whether the input data is forged or altered biometric data”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim 20.
Step 1: An apparatus, as above.
Step 2A Prong 1: The claim recites that “wherein the flatness value corresponds to a maximum loss value among loss values generated using the loss function and the respective output values.”: This limitation merely recites more mathematical concepts for calculations.
Step 2A Prong 2, Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-5, 8-12, 15, 16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sun et al. (“Out-of-Distribution Detection with Deep Nearest Neighbors,” Proceedings of the 39th International Conference on Machine Learning, PMLR 162:20827-20840, 2022).
Regarding claim 1.
Sun discloses a computing apparatus comprising: one or more processors configured to execute instructions; and one or more memories storing the instructions, wherein, the execution of the instructions by the one or more processors configures the one or more processors to: generate output data using a neural network model provided input data (see page 3, “Distance-based methods leverage feature embeddings extracted from a model and operate under the assumption that the test OOD samples are relatively far away from the ID data.”, also see Algorithm 1 OOD Detection with Deep Nearest Neighbors, shows output data using NN provided input);
determine, based on the output data, a maximum loss value among calculated loss values that correspond to neighboring data within a reference distance from the input data (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”);
and detect, based on the maximum loss value and a threshold, whether the input data is out-of-distribution (OOD) data that is different from in-distribution data corresponding to training data used in a training of the neural network model (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
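For illustration only, the kNN distance criterion quoted from Sun (page 3) can be sketched as follows; the function names, normalization choice, and parameter values are hypothetical:

```python
import numpy as np

def knn_ood_score(train_feats, test_feat, k=5):
    # Distance from the normalized test embedding to its k-th nearest
    # normalized training embedding; larger scores suggest OOD input.
    def normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    dists = np.linalg.norm(normalize(train_feats) - normalize(test_feat), axis=1)
    return np.sort(dists)[k - 1]

def is_ood(train_feats, test_feat, threshold, k=5):
    # Simple threshold-based criterion on the k-NN distance score.
    return knn_ood_score(train_feats, test_feat, k) >= threshold
```

This sketch illustrates the general distance-plus-threshold structure the quotation describes, not the exact procedure of the reference.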
Regarding claim 2.
Sun discloses the computing apparatus of claim 1,
Sun further discloses wherein, for the determining of the maximum loss, the one or more processors are further configured to determine the maximum loss value among the loss values using a cross entropy loss function (see page 4, “Training losses In our experiments, we aim to show that KNN-based OOD detection is agnostic to the training procedure, and is compatible with models trained under different losses. We consider two types of loss functions, with and without contrastive learning respectively.”).
Regarding claim 3.
Sun discloses the computing apparatus of claim 1,
Sun further discloses wherein, for the detection, the one or more processors are further configured to, when the maximum loss value is greater than or equal to the threshold, determine the input data to be the OOD data (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
Regarding claim 4.
Sun discloses the computing apparatus of claim 1,
Sun further discloses wherein, for the detection, the one or more processors are further configured to, when the maximum loss value is less than the threshold, determine the input data to be the in-distribution data (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
Regarding claim 5.
Sun discloses the computing apparatus of claim 1,
Sun further discloses wherein the input data comprises an image, the neural network model comprises an image classification model, and the in-distribution data corresponds to plural image data of each of plural classes the image classification model was trained to classify (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
Claims 8-12 recite a method performing the operations of the apparatus recited in claims 1-5, respectively. Therefore, the rejection of claims 1-5 above applies equally here.
Claim 15 recites a non-transitory computer-readable storage medium storing instructions that perform the operations of the apparatus recited in claim 1. Therefore, the rejection of claim 1 above applies equally here.
Regarding claim 16.
Sun discloses a computing apparatus comprising: one or more processors configured to execute instructions; and one or more memories storing the instructions, wherein, the execution of the instructions by the one or more processors configures the one or more processors to: generate respective output values obtained from the neural network model provided the input data and provided neighboring data (see page 3, “Distance-based methods leverage feature embeddings extracted from a model and operate under the assumption that the test OOD samples are relatively far away from the ID data.”, also see Algorithm 1 OOD Detection with Deep Nearest Neighbors, shows output data using NN provided input);
generate, based on the generated respective output values and a loss function, a flatness value within a reference distance from the input data, and detect, based on the flatness and a threshold, whether the input data is out-of-distribution (OOD) data (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”, i.e. wherein flatness value corresponds to maximum loss).
Regarding claim 18.
Sun discloses the computing apparatus of claim 16,
Sun further discloses wherein the one or more processors are further configured to determine the input data is forged or altered biometric data based on the input data being detected to be the OOD data (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
Regarding claim 19.
Sun discloses the computing apparatus of claim 18,
Sun further discloses wherein the one or more processors are further configured to determine, based on the flatness value, whether the input data is forged or altered biometric data (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
Regarding claim 20.
Sun discloses the computing apparatus of claim 16,
Sun further discloses wherein the flatness value corresponds to a maximum loss value among loss values generated using the loss function and the respective output values (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6, 13, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (“Out-of-Distribution Detection with Deep Nearest Neighbors,” Proceedings of the 39th International Conference on Machine Learning, PMLR 162:20827-20840, 2022) in view of Goodfellow et al. (“Explaining and Harnessing Adversarial Examples,” 20 Mar. 2015, Google Inc.).
Regarding claim 6.
Sun discloses the computing apparatus of claim 1,
Sun further discloses wherein, for the determining of the maximum loss, the one or more processors are further configured to: … determine the maximum loss value among loss values corresponding to neighboring data within a reference distance (see page 3, “we compute the k-th nearest neighbor distance between the embedding of each test image and the training set, and use a simple threshold-based criterion to determine if an input is OOD or not.”).
Sun does not teach generating converted data by adding noise to the input data.
Goodfellow teaches generating converted data by adding noise to the input data (see page 3, “Let θ be the parameters of a model, x the input to the model, y the targets associated with x (for machine learning tasks that have targets) and J(θ, x, y) be the cost used to train the neural network. We can linearize the cost function around the current value of θ, obtaining an optimal max-norm constrained perturbation of η = sign(∇xJ(θ, x, y)). We refer to this as the “fast gradient sign method” of generating adversarial examples. Note that the required gradient can be computed efficiently using backpropagation”, i.e., adding a perturbation (noise) to the input).
Both Sun and Goodfellow pertain to the problem of characterizing the distribution of input data and are therefore analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sun and Goodfellow to teach the above limitations. The motivation for doing so would be to address adversarial examples: “inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks’ vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.” (see Goodfellow, abstract).
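For illustration only, the fast gradient sign method quoted from Goodfellow can be sketched as follows; the toy loss and its gradient are hypothetical:

```python
import numpy as np

def fgsm_perturb(x, grad_x_loss, eps=0.25):
    # Fast gradient sign method: perturb each input coordinate by eps
    # in the direction that increases the training loss.
    return x + eps * np.sign(grad_x_loss)

# Toy example: for a linear loss J(x) = w . x the input gradient is w.
w = np.array([0.5, -2.0, 0.0])
x_adv = fgsm_perturb(np.zeros(3), w, eps=0.25)
```

In a real network the input gradient would be obtained by backpropagation, as the quotation notes.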
Claim 14 recites a method to perform the apparatus recited in claim 6. Therefore the rejection of claim 6 above applies equally here.
Regarding claim 17.
Sun discloses the computing apparatus of claim 16,
Sun does not teach the limitation of claim 17.
Goodfellow teaches wherein the neighboring data is generated by adding noise to the input data (see page 3, “Let θ be the parameters of a model, x the input to the model, y the targets associated with x (for machine learning tasks that have targets) and J(θ, x, y) be the cost used to train the neural network. We can linearize the cost function around the current value of θ, obtaining an optimal max-norm constrained perturbation of η = sign(∇xJ(θ, x, y)). We refer to this as the “fast gradient sign method” of generating adversarial examples. Note that the required gradient can be computed efficiently using backpropagation”, i.e., adding a perturbation (noise) to the input).
Both Sun and Goodfellow pertain to the problem of characterizing the distribution of input data and are therefore analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Sun and Goodfellow to teach the above limitations. The motivation for doing so would be to address adversarial examples: “inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks’ vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.” (see Goodfellow, abstract).
Allowable Subject Matter
Claims 7 and 14 would be allowable if rewritten to overcome the rejections under 35 U.S.C. 101 (abstract idea) set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Related prior art:
RANJAN et al. (US 20190303754 A1) teaches that face discrimination systems may benefit from techniques for providing increased accuracy. For example, certain discriminative face verification systems can benefit from L2-constrained softmax loss. A method can include applying an image of a face as an input to a deep convolutional neural network. The method can also include applying an output of a fully connected layer of the deep convolutional neural network to an L2-normalizing layer. The method can further include determining softmax loss based on an output of the L2-normalizing layer.
Ramaiah et al. (US 20210216828 A1) teaches generating one or more self-supervised proposal learning losses based on the one or more proposal features and corresponding proposal feature predictions. One or more consistency-based proposal learning losses are generated based on noisy proposal feature predictions and the corresponding proposal predictions without noise. A combined loss is generated using the one or more self-supervised proposal learning losses and the one or more consistency-based proposal learning losses. The neural network is updated based on the combined loss.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM whose telephone number is (571)272-2958. The examiner can normally be reached 10:30AM-5:30PM, M-F (E.S.T.).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IMAD KASSIM/Primary Examiner, Art Unit 2129