Prosecution Insights
Last updated: April 19, 2026
Application No. 18/177,189

COMPUTER-READABLE RECORDING MEDIUM STORING DETERMINATION PROGRAM, APPARATUS, AND METHOD

Non-Final OA: §101, §103, §112
Filed
Mar 02, 2023
Examiner
SIPPEL, MOLLY CLARKE
Art Unit
2122
Tech Center
2100 — Computer Architecture & Software
Assignee
Fujitsu Limited
OA Round
1 (Non-Final)
Grant Probability: 50% (Moderate)
Estimated OA Rounds: 1-2
Estimated Time to Grant: 3y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (7 granted / 14 resolved; -5.0% vs TC avg)
Interview Lift: +58.3% (strong; among resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline)
Total Applications: 39 (across all art units; 25 currently pending)

Statute-Specific Performance

§101: 33.8% (-6.2% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 23.6% (-16.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 14 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

This action is responsive to the application filed on 03/02/2023. Claims 1-12 are pending in the case. Claims 1, 5, and 9 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 03/08/2022. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/02/2023 is being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, the claim recites “…a change in a classification standard of the classification model based on the loss is a predetermined standard or more before and after re-training…” on lines 8-10. The grammatical structure of the limitation results in a lack of clarity. The intended interpretation of the limitation is unclear. For examination purposes, this limitation has been interpreted to mean “…a change in a classification standard, from before re-training to after re-training, of the classification model based on the loss is a predetermined standard or more”. Claims 2-4 are rejected as being dependent upon a rejected base claim without curing any of the deficiencies.

Regarding claim 5, the claim recites “…a change in a classification standard of the classification model based on the loss is a predetermined standard or more before and after re-training…” on lines 8-10. The grammatical structure of the limitation results in a lack of clarity. The intended interpretation of the limitation is unclear. For examination purposes, this limitation has been interpreted to mean “…a change in a classification standard, from before re-training to after re-training, of the classification model based on the loss is a predetermined standard or more”. Claims 6-8 are rejected as being dependent upon a rejected base claim without curing any of the deficiencies.

Regarding claim 9, the claim recites “…a change in a classification standard of the classification model based on the loss is a predetermined standard or more before and after re-training…” on lines 6-8. The grammatical structure of the limitation results in a lack of clarity. The intended interpretation of the limitation is unclear. For examination purposes, this limitation has been interpreted to mean “…a change in a classification standard, from before re-training to after re-training, of the classification model based on the loss is a predetermined standard or more”. Claims 10-12 are rejected as being dependent upon a rejected base claim without curing any of the deficiencies.

Claim Rejections - 35 USC § 101

35 U.S.C.
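The examiner's interpretation of the disputed limitation can be rendered as a short sketch. This is a hypothetical illustration, not code from the application: the "classification standard" is taken to be the model's weight vector, and the "predetermined standard" is a scalar threshold; all names and values are invented.

```python
import numpy as np

def unknown_data_detected(weights_before: np.ndarray,
                          weights_after: np.ndarray,
                          predetermined_standard: float) -> bool:
    """Interpreted limitation: if the change in the classification
    standard, from before re-training to after re-training, is the
    predetermined standard or more, unknown data is deemed included
    in the second data set."""
    change = np.linalg.norm(weights_after - weights_before)
    return bool(change >= predetermined_standard)

# A large weight shift after re-training on the second data set
# suggests that data outside the known classes was present.
w_before = np.zeros(4)
w_after = np.array([0.5, -0.5, 0.5, -0.5])  # L2 change = 1.0
print(unknown_data_detected(w_before, w_after, predetermined_standard=0.5))  # True
```

Under this reading, "before and after re-training" modifies the change measurement rather than the threshold comparison, which is the ambiguity the rejection identifies.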
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1, Statutory Category: Claim 1 is directed to a machine, which falls under one of the four statutory categories.

Step 2A, Prong 1, Judicial Exception: Claim 1 recites, in part, “classifies input data into any one of a plurality of classes by using a loss calculatable based on a second data set that is different from the first data set”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Further, the claim recites: “determining, in a case where a change in a classification standard of the classification model based on the loss is a predetermined standard or more before and after re-training, that unknown data that is not classified into any one of the plurality of classes is included in the second data set”. This limitation, under the broadest reasonable interpretation, covers the recitation of the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion), in this case an observation. See MPEP § 2106.04(a)(2)(III).

Step 2A, Prong 2, Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular the claim recites: “a non-transitory computer-readable recording medium”, “a determination program”, and “a computer”. These limitations are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Further, the claim recites: “re-training a classification model that has been trained by using a first data set”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Step 2B, Significantly More: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements: “a non-transitory computer-readable recording medium”, “a determination program”, “a computer”, and “re-training a classification model that has been trained by using a first data set” are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process.
Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 2, the rejection of claim 1 is incorporated, and further, the claim recites: “wherein the classification standard is a weight that specifies a determination plane that indicates a boundary of each class in the classification model”. This limitation is a continuation of the “determining, in a case where a change in a classification standard of the classification model based on the loss is a predetermined standard or more before and after re-training, that unknown data that is not classified into any one of the plurality of classes is included in the second data set” limitation identified as an abstract idea in the rejection of the parent claim, thus the claim recites a judicial exception. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 3, the rejection of claim 1 is incorporated, and further, the claim recites: “in the processing of re-training, a classification result of each piece of data included in the second data set by the classification model before re-training is set as a correct answer”. This limitation recites mental processes in addition to those identified in the rejection of the parent claim. Further, the claim recites: “re-training of the classification model is executed by using, as the loss, an error between the classification result of each piece of data included in the second data set by the classification model after re-training and the correct answer”. This limitation is a continuation of the “classifies input data into any one of a plurality of classes by using a loss calculatable based on a second data set that is different from the first data set” limitation identified as an abstract idea in the rejection of the parent claim. The claim does not include any additional elements that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.

Regarding claim 4, the rejection of claim 1 is incorporated, and further, the claim recites: “re-training of the classification model is executed by using, as the loss, an error between each piece of data included in the second data set and data restored by the restorer”. This limitation is a continuation of the “classifies input data into any one of a plurality of classes by using a loss calculatable based on a second data set that is different from the first data set” limitation identified as an abstract idea in the rejection of the parent claim. Further, the claim recites: “in the process of re-training, a restorer that restores each piece of data included in the second data set is trained from an output or an intermediate output when each piece of data included in the second data set is input to the classification model before re-training”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible.
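The claim 3 scheme characterized above can be sketched in a few lines: the pre-re-training model's classification results serve as the "correct answer", and the loss is the error of the re-trained model's results against those labels. The function names are illustrative and the threshold classifiers are stand-ins, not the application's embodiment.

```python
import numpy as np

def pseudo_label_loss(predict_before, predict_after,
                      second_data_set: np.ndarray) -> float:
    """Claim 3 as characterized: the classification result of the model
    before re-training is set as the correct answer; the loss is the
    error of the re-trained model's result against that answer."""
    correct_answer = predict_before(second_data_set)   # labels from the old model
    result_after = predict_after(second_data_set)      # labels after re-training
    return float(np.mean(result_after != correct_answer))  # 0-1 disagreement rate

# Toy usage with threshold classifiers standing in for the model
before = lambda x: (x > 0.0).astype(int)
after = lambda x: (x > 0.5).astype(int)
data = np.array([0.2, 0.7, -0.3, 0.9])
print(pseudo_label_loss(before, after, data))  # 0.25: they disagree only on 0.2
```

A nonzero loss of this kind is exactly the "change in a classification standard" signal the independent claims measure across re-training.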
Regarding claim 5:

Step 1, Statutory Category: Claim 5 is directed to a machine, which falls under one of the four statutory categories.

Step 2A, Prong 1, Judicial Exception: Claim 5 recites, in part, “classifies input data into any one of a plurality of classes by using a loss calculatable based on a second data set that is different from the first data set”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Further, the claim recites: “determine, in a case where a change in a classification standard of the classification model based on the loss is a predetermined standard or more before and after re-training, that unknown data that is not classified into any one of the plurality of classes is included in the second data set”. This limitation, under the broadest reasonable interpretation, covers the recitation of the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion), in this case an observation. See MPEP § 2106.04(a)(2)(III).

Step 2A, Prong 2, Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular the claim recites: “an information processing apparatus”, “a memory”, and “a processor coupled to the memory”. These limitations are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Further, the claim recites: “re-train a classification model that has been trained by using a first data set”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Step 2B, Significantly More: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements: “an information processing apparatus”, “a memory”, “a processor coupled to the memory”, and “re-train a classification model that has been trained by using a first data set” are additional elements that amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 6, the rejection of claim 5 is incorporated, and further, claim 6 is substantially similar to claim 2 and is rejected in the same manner, with the same reasoning applying.
Regarding claim 7, the rejection of claim 5 is incorporated, and further, claim 7 is substantially similar to claim 3 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 8, the rejection of claim 5 is incorporated, and further, claim 8 is substantially similar to claim 4 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 9:

Step 1, Statutory Category: Claim 9 is directed to a method, which falls under one of the four statutory categories.

Step 2A, Prong 1, Judicial Exception: Claim 9 recites, in part, “classifies input data into any one of a plurality of classes by using a loss calculatable based on a second data set that is different from the first data set”. This limitation, under the broadest reasonable interpretation, covers the recitation of a mathematical calculation, as directed to “a claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number”. See MPEP §2106.04(a)(2)(I)(C).

Further, the claim recites: “determining, in a case where a change in a classification standard of the classification model based on the loss is a predetermined standard or more before and after re-training, that unknown data that is not classified into any one of the plurality of classes is included in the second data set”. This limitation, under the broadest reasonable interpretation, covers the recitation of the abstract idea of a mental process that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper (including an observation, evaluation, judgment, opinion), in this case an observation. See MPEP § 2106.04(a)(2)(III).

Step 2A, Prong 2, Integration into a Practical Application: This judicial exception is not integrated into a practical application. In particular the claim recites: “re-training a classification model that has been trained by using a first data set”. This limitation is an additional element that amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. See MPEP §2106.05(f).

Step 2B, Significantly More: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element “re-training a classification model that has been trained by using a first data set” amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process. Elements that merely amount to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer in its ordinary capacity as a tool to perform an existing process cannot provide an inventive concept. The claim is not patent eligible.

Regarding claim 10, the rejection of claim 9 is incorporated, and further, claim 10 is substantially similar to claims 2 and 6 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 11, the rejection of claim 9 is incorporated, and further, claim 11 is substantially similar to claims 3 and 7 and is rejected in the same manner, with the same reasoning applying.

Regarding claim 12, the rejection of claim 9 is incorporated, and further, claim 12 is substantially similar to claims 4 and 8 and is rejected in the same manner, with the same reasoning applying.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-6, 8-10, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over G. Kwon, M. Prabhushankar, D. Temel, and G. AlRegib, Backpropagated Gradient Representations for Anomaly Detection, In Proceedings of the European Conference on Computer Vision (ECCV), 08/23/2020, https://arxiv.org/pdf/2007.09507, hereinafter referred to as “Kwon”, in view of Kanishima et al., U.S. Patent Application Publication No. 20230004863, hereinafter referred to as “Kanishima”.
Regarding claim 1, Kwon teaches A non-transitory computer-readable recording medium storing a determination program for causing a computer (Kwon, Page 14, Computational Efficiency of GradCon, Lines 2-5, “To show the computational efficiency of GradCon, we measure the average inference time per image using a machine with two GTX Titan X GPUs and compare computation time”) to execute processing comprising:

re-training a classification model that has been trained by using a first data set and that classifies input data into any one of a plurality of classes by using a loss calculatable based on a second data set that is different from the first data set (Kwon, Page 9, Section 5.1, Lines 5-7, “In abnormal class detection, images from one class of a dataset are considered as inliers and used for the training. Images from other classes are considered as outliers”; Kwon, Page 5, Section 3.1, Paragraph 2, Lines 2-3, “The autoencoder is trained to accurately reconstruct training images and the reconstructed training images form a manifold”; Kwon, Page 6, Lines 2-3, “During testing, any given input to the autoencoder is projected onto the reconstructed image manifold”; Kwon, Page 6, Lines 8-9, “Since the abnormal image has not been utilized for training, it will be poorly reconstructed”; the “training images” are considered to be the “first data set” and the data set used during testing that contains “abnormal image[s]” is considered to be the “second data set”; Kwon, Page 5, Section 3.1, Lines 5-6, and Equation 1, “The training is performed by minimizing a loss function, J(x; θ, φ), defined as follows: J(x; θ, φ) = L(x, g_φ(f_θ(x))) + Ω(z; θ, φ)”); and

determining, in a case where a change in a classification standard of the classification model based on the loss is … more before and after re-training, that unknown data that is not classified into any one of the plurality of classes is included in the second data set (Kwon, Page 8, Paragraph 2, Lines 1-7, “We propose to train an autoencoder with a directional gradient constraint to model the normality. In particular, based on the interpretation of gradients from the Fisher kernel perspective, we enforce the alignment between gradients. This constraint makes the gradients from normal data aligned with each other and result in small changes to the manifold. On the other hand, the gradients from abnormal data will not be aligned with others and guide abrupt changes to the manifold”; Kwon, Page 9, Paragraph 2, Lines 1-6, “During training, L is first calculated from the forward propagation. Through the backpropagation, ∂L/∂φ_i^k is obtained without updating the weights. Based on the obtained gradient, the entire loss J is calculated and finally the weights are updated using backpropagated gradients from the loss J. An anomaly score is defined by the combination of the reconstruction error and the gradient loss as L + βL_grad”; the “anomaly score” is considered to be the “change in a classification standard” as it quantifies the change of network weights through the “gradient loss”).

Kwon does not explicitly teach comparing the change in a classification standard with a predetermined standard.

Kanishima teaches determining an input is unknown data by comparing an anomaly score with a threshold (Kanishima, Paragraph 0052, Lines 4-8, “if the anomaly degree is equal to or greater than a threshold value, it can be determined that the target data x*n is anomalous data. In contrast, if the anomaly degree is less than the threshold value, it can be determined that the target data x*n is normal data”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to have modified the unknown data detection method of Kwon to include comparing the anomaly score with a threshold as taught by Kanishima. The motivation to do so would have been the ability to output a definitive determination result based on the anomalous input, rather than just an anomaly score (Kanishima, Paragraph 0052, Lines 1-3, “Further, the model execution unit 401 may determine whether or not the data is anomalous data based on the anomaly degree and output a determination result”).

Regarding claim 2, the rejection of claim 1 is incorporated, and further, the proposed combination teaches wherein the classification standard is a weight that specifies a determination plane that indicates a boundary of each class in the classification model (Kwon, Page 9, Section 5.1, Lines 1-7, “We conduct anomaly detection experiments to both qualitatively and quantitatively evaluate the performance of the gradient-based representations. In particular, we perform abnormal class detection and abnormal condition detection using the gradient constraint and compare GradCon with other state-of-the-art activation-based anomaly detection algorithms. In abnormal class detection, images from one class of a dataset are considered as inliers and used for the training. Images from other classes are considered as outliers”; Kwon, Page 2, Paragraph 2, Lines 4-8, “During training, the gradients with respect to the weights provide directional information to update the neural network and learn knowledge that it has not learned. The gradients from normal data do not guide a significant change of the current weight. However, the gradients from abnormal data guide more drastic updates on the network to fully represent data”; a person of ordinary skill in the art would recognize that any weight of the classification model “specifies a determination plane that indicates a boundary”).
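The proposed combination can be sketched in a few lines: Kwon's anomaly score L + βL_grad, compared against a threshold as taught by Kanishima. The function name and the β and threshold values here are invented for illustration.

```python
def is_unknown(recon_error: float, grad_loss: float,
               beta: float, threshold: float) -> bool:
    """Kwon: anomaly score = reconstruction error + beta * gradient loss
    (L + beta * L_grad). Kanishima: a score at or above the threshold is
    determined to be anomalous (unknown) data."""
    anomaly_score = recon_error + beta * grad_loss
    return anomaly_score >= threshold

# Score 0.8 + 0.4 * 0.5 = 1.0, at or above the threshold 0.9
print(is_unknown(recon_error=0.8, grad_loss=0.5, beta=0.4, threshold=0.9))  # True
```

The threshold comparison is what converts Kwon's continuous score into the claimed definitive determination result.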
Regarding claim 4, the rejection of claim 1 is incorporated, and further, the proposed combination teaches wherein, in the processing of re-training, a restorer that restores each piece of data included in the second data set is trained from an output or an intermediate output when each piece of data included in the second data set is input to the classification model before re-training, and re-training of the classification model is executed by using, as the loss, an error between each piece of data included in the second data set and data restored by the restorer (Kwon, Page 5, Section 3.1, Lines 1-9, “We use an autoencoder, which is an unsupervised representation learning framework to explain the geometric interpretation of gradients. An autoencoder consists of an encoder, f_θ, and a decoder, g_φ. From an input image, x, a latent variable, z, is generated as z = f_θ(x) and a reconstructed image is obtained by feeding the latent variable into the decoder, g_φ(f_θ(x)). The training is performed by minimizing a loss function, J(x; θ, φ), defined as follows: J(x; θ, φ) = L(x, g_φ(f_θ(x))) + Ω(z; θ, φ) (1), where L is a reconstruction error, which measures the dissimilarity between the input and the reconstructed image and Ω is a regularization term for the latent variable”; the “decoder” is considered to be the “restorer”).
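Kwon's Equation (1), J(x; θ, φ) = L(x, g_φ(f_θ(x))) + Ω(z; θ, φ), can be sketched with linear encoder/decoder stand-ins. The squared-error reconstruction term and L2 regularizer below are common choices assumed for illustration, not taken from Kwon, and all parameter values are made up.

```python
import numpy as np

def autoencoder_loss(x: np.ndarray, W_enc: np.ndarray, W_dec: np.ndarray,
                     lam: float = 0.01) -> float:
    """J(x; theta, phi) = L(x, g_phi(f_theta(x))) + Omega(z; theta, phi)."""
    z = W_enc @ x                          # encoder f_theta
    x_rec = W_dec @ z                      # decoder g_phi (the "restorer")
    L = float(np.sum((x - x_rec) ** 2))    # reconstruction error L
    omega = lam * float(np.sum(z ** 2))    # regularization term Omega on z
    return L + omega

# With identity weights, reconstruction is perfect and only Omega contributes.
x = np.array([1.0, 2.0])
print(autoencoder_loss(x, np.eye(2), np.eye(2)))
```

Mapping to the claim language, the decoder plays the role of the restorer and L is the error between the input data and the restored data.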
Regarding claim 5, Kwon teaches An information processing apparatus comprising: a memory; and a processor coupled to the memory (Kwon, Page 14, Computational Efficiency of GradCon, Lines 2-5, “To show the computational efficiency of GradCon, we measure the average inference time per image using a machine with two GTX Titan X GPUs and compare computation time”) and configured to: re-train a classification model that has been trained by using a first data set and that classifies input data into any one of a plurality of classes by using a loss calculatable based on a second data set that is different from the first data set (Kwon, Page 9, Section 5.1, Lines 5-7, “In abnormal class detection, images from one class of a dataset are considered as inliers and used for the training. Images from other classes are considered as outliers”; Kwon, Page 5, Section 3.1, Paragraph 2, Lines 2-3, “The autoencoder is trained to accurately reconstruct training images and the reconstructed training images form a manifold”; Kwon, Page 6, Lines 2-3, “During testing, any given input to the autoencoder is projected onto the reconstructed image manifold”; Kwon, Page 6, Lines 8-9, “Since the abnormal image has not been utilized for training, it will be poorly reconstructed”; The “training images” are considered to be the “first data set” and the data set used during testing that contains “abnormal image[s]” is considered to be the “second data set” ; Kwon, Page 5, Section 3.1, Lines 5-6, and Equation 1, “The training is performed by minimizing a loss function, J(x; θ ,   ∅ ), defined as follows: J x ; θ ,   ∅ = L x , g ∅ f θ x + Ω ( z ; θ , ∅ ) ”); and determine, in a case where a change in a classification standard of the classification model based on the loss is … more before and after re-training, that unknown data that is not classified into any one of the plurality of classes is included in the second data set (Kwon, Page 8, Paragraph 2, Lines 1-7, “We propose to train an autoencoder 
with a directional gradient constraint to model the normality. In particular, based on the interpretation of gradients from the Fisher kernel perspective, we enforce the alignment between gradients. This constraint makes the gradients from normal data aligned with each other and result in small changes to the manifold. On the other hand, the gradients from abnormal data will not be aligned with others and guide abrupt changes to the manifold”; Kwon, Page 9, Paragraph 2, Lines 1-6, “During training, L is first calculated from the forward propagation. Through the backpropagation, ∂ L ∂ ∅ i k is obtained without updating the weights. Based on the obtained gradient, the entire loss J is calculated and finally the weights are updated using backpropagated gradients from the loss J. An anomaly score is defined by the combination of the reconstruction error and the gradient loss as L + βLgrad”; The “anomaly score” is considered to be the “change in a classification standard” as it quantifies the change of network weights through the “gradient loss”). Kwon does not explicitly teach comparing the change in a classification standard with a predetermined standard. Kanishima teaches determining an input is unknown data by comparing an anomaly score with a threshold (Kanishima, Paragraph 0052, Lines 4-8, “if the anomaly degree is equal to or greater than a threshold value, it can be determined that the target data x*n is anomalous data. In contrast, if the anomaly degree is less than the threshold value, it can be determined that the target data x*n is normal data”). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to have modified the unknown data detection method of Kwon to include comparing the anomaly score with a threshold as taught by Kanishima. 
The motivation to do so would have been the ability to output a definitive determination result based on the anomalous input, rather than just an anomaly score (Kanishima, Paragraph 0052, Lines 1-3, “Further, the model execution unit 401 may determine whether or not the data is anomalous data based on the anomaly degree and output a determination result”). Regarding claim 6, the rejection of claim 5 is incorporated, and further, the proposed combination teaches wherein the classification standard is a weight that specifies a determination plane that indicates a boundary of each class in the classification model (Kwon, Page 9, Section 5.1, Lines 1-7, “We conduct anomaly detection experiments to both qualitatively and quantitatively evaluate the performance of the gradient-based representations. In particular, we perform abnormal class detection and abnormal condition detection using the gradient constraint and compare GradCon with other state-of-the-art activation-based anomaly detection algorithms. In abnormal class detection, images from one class of a dataset are considered as inliers and used for the training. Images from other classes are considered as outliers”; Kwon, Page 2, Paragraph 2, Lines 4-8, “During training, the gradients with respect to the weights provide directional information to update the neural network and learn knowledge that it has not learned. The gradients from normal data do not guide a significant change of the current weight. However, the gradients from abnormal data guide more drastic updates on the network to fully represent data”; A person of ordinary skill in the art would recognize that any weight of the classification model “specifies a determination plane that indicates a boundary”).
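The combination the examiner maps above (Kwon's combined anomaly score, Kanishima's threshold test) can be sketched in a few lines. This is a minimal illustrative sketch with hypothetical names and values, not code from either reference:

```python
# Minimal illustrative sketch (hypothetical names/values, not code from
# Kwon or Kanishima). Kwon's anomaly score combines the reconstruction
# error L with a gradient-alignment loss: score = L + beta * L_grad.
# Kanishima-style thresholding then turns the score into a definitive
# normal/anomalous determination.

def anomaly_score(recon_error: float, grad_loss: float, beta: float = 0.1) -> float:
    """Combine reconstruction error and gradient loss into one score."""
    return recon_error + beta * grad_loss

def is_unknown(score: float, threshold: float) -> bool:
    """Score at or above the threshold: input is treated as unknown/anomalous."""
    return score >= threshold

# A well-reconstructed (normal) input vs. a poorly reconstructed one.
print(is_unknown(anomaly_score(0.02, 0.05), threshold=0.1))  # False
print(is_unknown(anomaly_score(0.40, 0.90), threshold=0.1))  # True
```

The threshold step is exactly what the examiner pulls from Kanishima: the score alone is only a number, while the comparison yields the "definitive determination result" cited as the motivation to combine.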
Regarding claim 8, the rejection of claim 5 is incorporated, and further, the proposed combination teaches wherein, in the processing of re-training, a restorer that restores each piece of data included in the second data set is trained from an output or an intermediate output when each piece of data included in the second data set is input to the classification model before re-training, and re-training of the classification model is executed by using, as the loss, an error between each piece of data included in the second data set and data restored by the restorer (Kwon, Page 5, Section 3.1, Lines 1-9, “We use an autoencoder, which is an unsupervised representation learning framework to explain the geometric interpretation of gradients. An autoencoder consists of an encoder, f_θ, and a decoder, g_φ. From an input image, x, a latent variable, z, is generated as z = f_θ(x) and a reconstructed image is obtained by feeding the latent variable into the decoder, g_φ(f_θ(x)). The training is performed by minimizing a loss function, J(x; θ, φ), defined as follows: J(x; θ, φ) = L(x, g_φ(f_θ(x))) + Ω(z; θ, φ) (1) where L is a reconstruction error, which measures the dissimilarity between the input and the reconstructed image and Ω is a regularization term for the latent variable”; The “decoder” is considered to be the “restorer”). Regarding claim 9, Kwon teaches A determination method comprising: re-training a classifi
Read full office action
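For reference, the loss structure quoted from Kwon in the claim 8 discussion, J(x; θ, φ) = L(x, g_φ(f_θ(x))) + Ω(z; θ, φ), can be sketched with a toy linear encoder/decoder. This is an illustrative stand-in only; Kwon's actual model is a convolutional autoencoder:

```python
import numpy as np

# Toy linear autoencoder illustrating the structure of Kwon's Equation (1):
# J(x; theta, phi) = L(x, g_phi(f_theta(x))) + Omega(z; theta, phi).
# Illustrative stand-in only, not Kwon's implementation.

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(2, 4))  # f_theta: 4-dim input -> 2-dim latent
W_dec = rng.normal(size=(4, 2))  # g_phi: the decoder, i.e. the "restorer"

def loss_J(x, lam=0.01):
    z = W_enc @ x                  # latent variable z = f_theta(x)
    x_hat = W_dec @ z              # restored data g_phi(f_theta(x))
    L = np.mean((x - x_hat) ** 2)  # reconstruction error L
    Omega = lam * np.sum(z ** 2)   # regularization term on the latent z
    return L + Omega               # the loss J minimized during (re-)training

print(loss_J(rng.normal(size=4)))  # a non-negative scalar loss
```

The decoder here plays the role the examiner assigns to the claimed "restorer": it reconstructs the input from an intermediate output, and the reconstruction error L serves as the loss.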

Prosecution Timeline

Mar 02, 2023
Application Filed
Nov 19, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602592
NOISE COMMUNICATION FOR FEDERATED LEARNING
2y 5m to grant · Granted Apr 14, 2026
Patent 12596916
CONSTRAINED MASKING FOR SPARSIFICATION IN MACHINE LEARNING
2y 5m to grant · Granted Apr 07, 2026
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+58.3%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
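The headline figures above can be reproduced under some assumptions: that the 50% grant probability is simply the career allow rate (7 of 14 resolved cases), and that the +58.3% interview lift is added in percentage points and capped at 99%. The dashboard's actual model may differ; this is only a plausible reconstruction:

```python
# Plausible reconstruction of the dashboard arithmetic (assumptions:
# grant probability = career allow rate; interview lift added in
# percentage points, capped at 99%). The actual model may differ.

granted, resolved = 7, 14
base_rate = granted / resolved            # 0.5 -> "50% Grant Probability"

interview_lift = 0.583                    # "+58.3% Interview Lift"
with_interview = min(base_rate + interview_lift, 0.99)

print(f"{base_rate:.0%}")       # 50%
print(f"{with_interview:.0%}")  # 99%
```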
