Prosecution Insights
Last updated: April 19, 2026
Application No. 18/179,778

DATA PROCESSING METHOD AND ELECTRONIC DEVICE

Non-Final OA: §101, §102, §103, §112
Filed: Mar 07, 2023
Examiner: HAEFNER, KAITLYN RENEE
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 50% (Moderate)
OA Rounds: 1-2
To Grant: 4y 2m
With Interview: 99%

Examiner Intelligence

Grants 50% of resolved cases.

Career Allow Rate: 50% (2 granted / 4 resolved; -5.0% vs TC avg)
Interview Lift: strong, +66.7% among resolved cases with interview
Avg Prosecution (typical timeline): 4y 2m
Currently Pending: 32
Total Applications (career history, across all art units): 36
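The card arithmetic above is straightforward to reproduce. In this sketch the granted/resolved counts come from the card itself; the Tech Center average (55%) is back-solved from the displayed -5.0% delta and is an assumption, not a reported figure.

```python
# Reproduce the examiner-card arithmetic shown above.
granted, resolved = 2, 4                   # from the card
tc_avg = 0.55                              # assumed TC average, back-solved from the -5.0% delta

allow_rate = granted / resolved            # career allow rate
delta_vs_tc = (allow_rate - tc_avg) * 100  # percentage-point delta vs TC average

print(f"{allow_rate:.0%}")    # 50%
print(f"{delta_vs_tc:+.1f}")  # -5.0
```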

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 31.1% (-8.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 4 resolved cases.

Office Action

Rejections raised: §101, §102, §103, §112
DETAILED ACTION

This action is in response to the application filed 03/07/2023. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The abstract of the disclosure is objected to because the word count exceeds 150 words. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claim 6 is objected to because of the following informalities:

Regarding claim 6, the Examiner respectfully notes that claim 6 is a method claim and the limitation of “if the score value is not higher than a preset threshold, determining a first attribute of the data to be detected” is a contingent limitation; therefore, under the broadest reasonable interpretation, this limitation may not be performed (“The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met.” MPEP 2111.04(II)). Accordingly, the Examiner recommends that the Applicant positively recite this limitation as “determine the score value is not higher than a preset threshold to determine a first attribute of data to be detected…” to avoid a contingent interpretation.
Regarding claim 6, the Examiner respectfully notes that claim 6 is a method claim and the limitation of “if the score value is higher than the preset threshold, determining a second attribute of the data to be detected” is a contingent limitation; therefore, under the broadest reasonable interpretation, this limitation may not be performed (“The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met.” MPEP 2111.04(II)). Accordingly, the Examiner recommends that the Applicant positively recite this limitation as “determine that the score value is higher than the preset threshold to determine a second attribute of the data to be detected…” to avoid a contingent interpretation. Appropriate correction is required.

Claim Interpretation

In [00117] of the instant specification, the applicant notes, “A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.” Therefore, the examiner notes that the computer readable storage medium is understood to mean non-transitory. The examiner further respectfully recommends amending the claim language to indicate that the computer readable storage medium is non-transitory to further aid the understanding of the reading public.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "the reconstructed data item is input into the generative sub-model to obtain the first output data item" in lines 9-10. It is unclear what part of the architecture "the generative sub-model" refers to. Specifically, according to Fig. 3, there are two generative sub-models, so it is unclear which generative sub-model is being referenced; it is also unclear whether there are multiple generative sub-models or one generative sub-model based on the recited limitations and the drawings. For purposes of examination, the Examiner has interpreted there to be one generative sub-model.

Claim 3 recites the limitation "a training objective of the second sub-function" in lines 3-4. Specifically, it is unclear whether this training objective of the second sub-function is one of the training objectives recited in claim 2, line 8. For purposes of examination, the Examiner has interpreted this training objective of the second sub-function to be a new training objective.

Claim 8 recites the limitation "inputting the reconstructed data item into the generative sub-model to obtain the first output data item" in lines 4-5.
It is unclear what part of the architecture "the generative sub-model" refers to. Specifically, according to Fig. 3, there are two generative sub-models, so it is unclear which generative sub-model is being referenced; it is also unclear whether there are multiple generative sub-models or one generative sub-model based on the recited limitations and the drawings. For purposes of examination, the Examiner has interpreted there to be one generative sub-model.

Claim 11 recites the limitation "a training objective of the second sub-function" in line 6. Specifically, it is unclear whether this training objective of the second sub-function is one of the training objectives recited in claim 9, line 11. For purposes of examination, the Examiner has interpreted this training objective of the second sub-function to be a new training objective.

Claim 13 recites the limitation "the reconstructed data item is input into the generative sub-model to obtain the first output data item" in lines 11-12. It is unclear what part of the architecture "the generative sub-model" refers to. Specifically, according to Fig. 3, there are two generative sub-models, so it is unclear which generative sub-model is being referenced; it is also unclear whether there are multiple generative sub-models or one generative sub-model based on the recited limitations and the drawings. For purposes of examination, the Examiner has interpreted there to be one generative sub-model.

Claim 15 recites the limitation "a training objective of the second sub-function" in line 4. Specifically, it is unclear whether this training objective of the second sub-function is one of the training objectives recited in claim 14, lines 8-9. For purposes of examination, the Examiner has interpreted this training objective of the second sub-function to be a new training objective.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 9, and 11-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Subject Matter Eligibility Analysis Step 1: Claim 1 recites a method and is thus a process, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 1 recites:

- determining an attribute of the data to be detected … the attribute indicating whether the data to be detected is anomalous data (This limitation is a mental process as it encompasses a human mentally determining an attribute to be detected.)

Therefore, claim 1 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 1 further recites additional elements of:

- obtaining data to be detected (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
- using a trained anomaly detection model (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
- wherein the anomaly detection model is trained based on a difference between a reconstructed data item and a normal data item and a difference between a first output data item and the reconstructed data item (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
- wherein during training, the normal data item is input into a generative sub-model of the anomaly detection model to obtain the reconstructed data item (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
- and the reconstructed data item is input into the generative sub-model to obtain the first output data item (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)

Therefore, claim 1 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 1 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

- obtaining data to be detected is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
- using a trained anomaly detection model uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
- wherein the anomaly detection model is trained based on a difference between a reconstructed data item and a normal data item and a difference between a first output data item and the reconstructed data item uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
- wherein during training, the normal data item is input into a generative sub-model of the anomaly detection model to obtain the reconstructed data item is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v.
Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
- and the reconstructed data item is input into the generative sub-model to obtain the first output data item is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).

Therefore, claim 1 is subject-matter ineligible.

Regarding Claim 2:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 2 recites:

- wherein the first loss function is constructed based on the difference between the reconstructed data item and the normal data item and the difference between the first output data item and the reconstructed data item (This limitation is a mental process as it encompasses a human mentally constructing a loss function based on differences.)
- the first loss function comprises a first sub-function and a second sub-function (This limitation is a mental process as it encompasses a human mentally constructing a loss function with sub-functions.)
- the first sub-function is obtained based on the difference between the reconstructed data item and the normal data item (This limitation is a mental process as it encompasses a human mentally obtaining the sub-function based on a difference.)
- and the second sub-function is obtained based on the difference between the first output data item and the reconstructed data item (This limitation is a mental process as it encompasses a human mentally obtaining the sub-function based on a difference.)
- wherein training objectives of the first sub-function and the second sub-function are opposite (This limitation is a mental process as it encompasses a human mentally creating sub-functions with opposite training objectives.)

Therefore, claim 2 recites an abstract idea.
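Read literally, the first loss function of claim 2 combines two reconstruction differences whose training objectives are opposite. A minimal NumPy sketch of one plausible reading follows; the L2 distance, the callable G standing in for the generative sub-model, and the sign convention used to give the second sub-function the opposite objective are all illustrative assumptions, not taken from the record.

```python
import numpy as np

def first_loss(G, x_normal):
    """One plausible reading of the claimed first loss function.

    G is a stand-in for the generative sub-model (any callable);
    the L2 distance and the subtraction (giving the second
    sub-function the opposite training objective) are assumptions.
    """
    x_rec = G(x_normal)  # reconstructed data item
    x_out = G(x_rec)     # first output data item (re-reconstruction)

    sub1 = np.sum((x_rec - x_normal) ** 2)  # first sub-function: minimized
    sub2 = np.sum((x_out - x_rec) ** 2)     # second sub-function: opposite objective
    return sub1 - sub2

# An identity "model" reconstructs perfectly, so both terms vanish:
print(first_loss(lambda x: x, np.ones(4)))  # 0.0
```

Claim 3's second loss function would, on the same reading, apply an analogous third sub-function to anomalous training items.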
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 2 further recites additional elements of:

- wherein the anomaly detection model is trained based on a first loss function (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 2 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 2 do not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the anomaly detection model is trained based on a first loss function uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 2 is subject-matter ineligible.

Regarding Claim 3:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 3 recites the same abstract ideas as claim 1. Therefore, claim 3 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 3 further recites additional elements of:

- wherein the anomaly detection model is further trained based on a second loss function, wherein the second loss function comprises a third sub-function, a training objective of the third sub-function is consistent with a training objective of the second sub-function, the third sub-function is obtained based on a difference between a second output data item and an anomalous data item in a training set (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
- and the second output data item is obtained by inputting the anomalous data item in the training set into the generative sub-model (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
Therefore, claim 3 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 3 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

- wherein the anomaly detection model is further trained based on a second loss function, wherein the second loss function comprises a third sub-function, a training objective of the third sub-function is consistent with a training objective of the second sub-function, the third sub-function is obtained based on a difference between a second output data item and an anomalous data item in a training set uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
- and the second output data item is obtained by inputting the anomalous data item in the training set into the generative sub-model is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).

Therefore, claim 3 is subject-matter ineligible.

Regarding Claim 4:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 4 recites:

- for determining whether the reconstructed data item is true or false (This limitation is a mental process as it encompasses a human mentally determining whether data is true or false.)

Therefore, claim 4 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 4 further recites additional elements of:

- wherein the trained anomaly detection model further comprises a discriminative sub-model (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 4 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 4 do not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the trained anomaly detection model further comprises a discriminative sub-model uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 4 is subject-matter ineligible.

Regarding Claim 5:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 5 recites the same abstract ideas as claim 4. Therefore, claim 5 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 5 further recites additional elements of:

- wherein the trained anomaly detection model is obtained by training the generative sub-model and the discriminative sub-model in an adversarial manner (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 5 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 5 do not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the trained anomaly detection model is obtained by training the generative sub-model and the discriminative sub-model in an adversarial manner uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 5 is subject-matter ineligible.
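Claims 4-5 add a discriminative sub-model trained against the generative sub-model "in an adversarial manner." The scalar toy below shows only the alternating update schedule that adversarial training implies; the one-parameter models, the logistic discriminator, the learning rate, and the hand-derived gradients are all illustrative assumptions, not the applicant's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-parameter stand-ins: G "reconstructs" a data item, D scores how
# "true" (real) an item looks. Chosen only to make the alternating
# adversarial schedule concrete and runnable.
g_w, d_w, lr = 0.1, 0.1, 0.05

def G(x):
    return g_w * x                      # reconstruction

def D(x):
    return 1 / (1 + np.exp(-d_w * x))   # probability the item is "true"

for _ in range(200):
    x = rng.normal(1.0, 0.1)            # a normal data item
    x_rec = G(x)

    # Discriminator step: push D(x) toward 1 and D(x_rec) toward 0
    # (binary cross-entropy gradient w.r.t. d_w, derived by hand).
    d_w -= lr * ((D(x) - 1) * x + D(x_rec) * x_rec)

    # Generator step: push D(G(x)) toward 1, i.e. fool the discriminator.
    g_w -= lr * (D(x_rec) - 1) * d_w * x
```

Over the loop, g_w drifts upward from its initial 0.1 toward reconstructions the discriminator can no longer separate from real items, which is the qualitative behavior adversarial training aims for.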
Regarding Claim 6:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 6 recites:

- determining a score value of the data to be detected … the score value representing a difference between data obtained by the anomaly detection model by reconstructing the data to be detected and the data to be detected (This limitation is a mental process as it encompasses a human mentally determining a score value of the data to be detected.)
- if the score value is not higher than a preset threshold, determining a first attribute of the data to be detected, the first attribute indicating that the data to be detected is normal data (This limitation is a mental process as it encompasses a human mentally determining a first attribute of the data indicating that the data is normal.)
- if the score value is higher than the preset threshold, determining a second attribute of the data to be detected, the second attribute indicating that the data to be detected is anomalous data (This limitation is a mental process as it encompasses a human mentally determining a second attribute of the data indicating that the data is anomalous.)

Therefore, claim 6 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 6 further recites additional elements of:

- by using the trained anomaly detection model (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 6 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 6 do not provide significantly more than the abstract idea itself, taken alone and in combination, because by using the trained anomaly detection model uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)). Therefore, claim 6 is subject-matter ineligible.
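Claim 6's contingent limitations, once positively recited as the Examiner suggests, reduce to a single threshold test on a reconstruction-error score. A small sketch of that decision rule, where the callable G, the L2 score, and the 0.5 threshold are illustrative placeholders rather than anything from the record:

```python
import numpy as np

def detect(G, x, threshold=0.5):
    """Claim 6's decision rule as a plain threshold test.

    G (the trained generative sub-model) and the preset threshold
    are placeholders; the L2 reconstruction-error score is an
    illustrative choice.
    """
    score = float(np.sum((G(x) - x) ** 2))  # reconstruction-error score
    # score <= threshold -> first attribute: normal data
    # score >  threshold -> second attribute: anomalous data
    return "normal" if score <= threshold else "anomalous"

print(detect(lambda x: x, np.ones(3)))      # normal    (score 0.0)
print(detect(lambda x: 0 * x, np.ones(3)))  # anomalous (score 3.0)
```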
Regarding Claim 7:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 7 recites the same abstract ideas as claim 1. Therefore, claim 7 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 7 further recites additional elements of:

- wherein the data to be detected belongs to any of the following categories: audio data, electrocardiogram data, electroencephalogram data, image data, video data, point cloud data, or volume data (This element does not integrate the abstract idea into a practical application because it recites a technological environment in which to apply a judicial exception (see MPEP 2106.05(h)).)

Therefore, claim 7 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 7 do not provide significantly more than the abstract idea itself, taken alone and in combination, because wherein the data to be detected belongs to any of the following categories: audio data, electrocardiogram data, electroencephalogram data, image data, video data, point cloud data, or volume data specifies a particular technological environment to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(h)). Therefore, claim 7 is subject-matter ineligible.

Regarding Claim 9:

Subject Matter Eligibility Analysis Step 1: Claim 9 recites a method and is thus a process, one of the four statutory categories of patentable subject matter. Claim 9 further recites all limitations from claim 8, since claim 9 depends on claim 8.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 9 recites:

- constructing a first loss function based on the difference between the reconstructed data item and the normal data item and the difference between the first output data item and the reconstructed data item (This limitation is a mental process as it encompasses a human mentally constructing a loss function based on differences.)
- wherein the first loss function comprises a first sub-function and a second sub-function (This limitation is a mental process as it further describes the mental process of constructing a loss function.)
- the first sub-function is obtained based on the difference between the reconstructed data item and the normal data item (This limitation is a mental process as it describes the mental process of constructing a loss function by obtaining a sub-function based on a difference.)
- and the second sub-function is obtained based on the difference between the first output data item and the reconstructed data item (This limitation is a mental process as it describes the mental process of constructing a loss function by obtaining a sub-function based on a difference.)
- wherein training objectives of the first sub-function and the second sub-function are opposite (This limitation is a mental process as it encompasses a human mentally creating sub-functions with opposite training objectives.)

Therefore, claim 9 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 9 further recites additional elements of:

- inputting a normal data item in a training set into a generative sub-model of the anomaly detection model to obtain a reconstructed data item (claim 8) (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
- inputting the reconstructed data item into the generative sub-model to obtain a first output data item (claim 8) (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
- training the anomaly detection model based on a difference between the reconstructed data item and the normal data item and a difference between the first output data item and the reconstructed data item
(claim 8) (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
- wherein the training the anomaly detection model based on difference between the reconstructed data item and the normal data item and a difference between the first output data item and the reconstructed data item comprises (claim 9) (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
- training the anomaly detection model based on the first loss function (claim 9) (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 9 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 9 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

- inputting a normal data item in a training set into a generative sub-model of the anomaly detection model to obtain a reconstructed data item (claim 8) is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
- inputting the reconstructed data item into the generative sub-model to obtain a first output data item (claim 8) is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
- training the anomaly detection model based on a difference between the reconstructed data item and the normal data item and a difference between the first output data item and the reconstructed data item (claim 8) uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
- wherein the training the anomaly detection model based on difference between the reconstructed data item and the normal data item and a difference between the first output data item and the reconstructed data item comprises (claim 9) uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
- training the anomaly detection model based on the first loss function (claim 9) uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).

Therefore, claim 9 is subject-matter ineligible.

Regarding Claim 11:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 11 recites the same abstract ideas as claim 9. Therefore, claim 11 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 11 further recites additional elements of:

- inputting the anomalous data item in the training set into the generative sub-model to obtain a second output data item (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
- training the anomaly detection model based on a second loss function, wherein the second loss function comprises a third sub-function, a training objective of the third sub-function is consistent with a training objective of the second sub-function, and the third sub-function is obtained based on a difference between the second output data item and the anomalous data item
(This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 11 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 11 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

- inputting the anomalous data item in the training set into the generative sub-model to obtain a second output data item is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
- training the anomaly detection model based on a second loss function, wherein the second loss function comprises a third sub-function, a training objective of the third sub-function is consistent with a training objective of the second sub-function, and the third sub-function is obtained based on a difference between the second output data item and the anomalous data item uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).

Therefore, claim 11 is subject-matter ineligible.

Regarding Claim 12:

Subject Matter Eligibility Analysis Step 1: Claim 12 recites a method and is thus a process, one of the four statutory categories of patentable subject matter. Claim 12 further recites all limitations from claim 8, since claim 12 depends on claim 8.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 12 recites:

- determining whether the reconstructed data item is true or false (This limitation is a mental process as it encompasses a human mentally determining whether data is true or false.)

Therefore, claim 12 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 12 further recites additional elements of:

- inputting a normal data item in a training set into a generative sub-model of the anomaly detection model to obtain a reconstructed data item (claim 8) (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
- inputting the reconstructed data item into the generative sub-model to obtain a first output data item (claim 8) (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).)
- training the anomaly detection model based on a difference between the reconstructed data item and the normal data item and a difference between the first output data item and the reconstructed data item (claim 8) (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)
- wherein the anomaly detection model further comprises a discriminative sub-model for (claim 12) (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).)

Therefore, claim 12 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 12 do not provide significantly more than the abstract idea itself, taken alone and in combination, because:

- inputting a normal data item in a training set into a generative sub-model of the anomaly detection model to obtain a reconstructed data item (claim 8) is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
- inputting the reconstructed data item into the generative sub-model to obtain a first output data item (claim 8) is the well understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)).
- training the anomaly detection model based on a difference between the reconstructed data item and the normal data item and a difference between the first output data item and the reconstructed data item (claim 8) uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).
- wherein the anomaly detection model further comprises a discriminative sub-model for (claim 12) uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)).

Therefore, claim 12 is subject-matter ineligible.

Regarding Claim 13:

Subject Matter Eligibility Analysis Step 1: Claim 13 recites a computer readable storage medium and is thus an article of manufacture, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 13 recites:

- determining an attribute of the data to be detected … the attribute indicating whether the data to be detected is anomalous data (This limitation is a mental process as it encompasses a human mentally determining an attribute to be detected.)

Therefore, claim 13 recites an abstract idea.
Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 13 further recites the additional elements of: A computer readable storage medium having machine-executable instructions stored thereon, the machine-executable instructions, when executed by a device, cause the device to perform: (This element does not integrate the abstract idea into a practical application because it recites generic computing components on which to perform the abstract idea (see MPEP 2106.05(f)).); obtaining data to be detected (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).); using a trained anomaly detection model (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).); wherein the anomaly detection model is trained based on a difference between a reconstructed data item and a normal data item and a difference between a first output data item and the reconstructed data item (This element does not integrate the abstract idea into a practical application because it amounts to mere “apply it on a computer” (see MPEP 2106.05(f)).); wherein during training, the normal data item is input into a generative sub-model of the anomaly detection model to obtain the reconstructed data item (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).); and the reconstructed data item is input into the generative sub-model to obtain the first output data item. (This element does not integrate the abstract idea into a practical application because it recites insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)).) Therefore, claim 13 is not integrated into a practical application.
Subject Matter Eligibility Analysis Step 2B: The additional elements of claim 13 do not provide significantly more than the abstract idea itself, taken alone and in combination, because: A computer readable storage medium having machine-executable instructions stored thereon, the machine-executable instructions, when executed by a device, cause the device to perform uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)); obtaining data to be detected is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)); using a trained anomaly detection model uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)); wherein the anomaly detection model is trained based on a difference between a reconstructed data item and a normal data item and a difference between a first output data item and the reconstructed data item uses a computer as a tool to perform the abstract idea and cannot provide significantly more (see MPEP 2106.05(f)); wherein during training, the normal data item is input into a generative sub-model of the anomaly detection model to obtain the reconstructed data item is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)); and the reconstructed data item is input into the generative sub-model to obtain the first output data item is the well-understood, routine, and conventional activity of “transmitting or receiving data over a network” (see MPEP 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network)). Therefore, claim 13 is subject-matter ineligible.

Regarding claim 14, claim 14 recites substantially similar limitations to claim 2, and is therefore rejected under the same analysis. Regarding claim 15, claim 15 recites substantially similar limitations to claim 3, and is therefore rejected under the same analysis.

Regarding Claim 16:

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 16 recites wherein the difference between the reconstructed data item and the normal data item is smaller than a first threshold, and the difference between the first output data item and the reconstructed data item is larger than a second threshold. (This limitation is a mental process, as it further describes the mental process of constructing a loss function based on differences in claim 14.) Therefore, claim 16 recites an abstract idea.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 16 does not further recite any additional elements. Therefore, claim 16 is not integrated into a practical application.

Subject Matter Eligibility Analysis Step 2B: Since there are no additional elements, claim 16 does not provide significantly more than the abstract idea itself, taken alone and in combination. Therefore, claim 16 is subject-matter ineligible.

Regarding claim 17, claim 17 recites substantially similar limitations to claim 4, and is therefore rejected under the same analysis. Regarding claim 18, claim 18 recites substantially similar limitations to claim 5, and is therefore rejected under the same analysis. Regarding claim 19, claim 19 recites substantially similar limitations to claim 6, and is therefore rejected under the same analysis. Regarding claim 20, claim 20 recites substantially similar limitations to claim 7, and is therefore rejected under the same analysis.
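For illustration, the two-threshold relationship recited in claim 16 can be sketched numerically. This is a hypothetical sketch only: the helper names are invented, and "difference" is modeled as a mean absolute error rather than any particular norm from the record.

```python
# Hypothetical sketch of the claim 16 relationship: the reconstruction
# difference is smaller than a first threshold while the re-reconstruction
# difference is larger than a second threshold. Names and values are
# illustrative, not from the application or the cited art.

def difference(a, b):
    """Mean absolute difference between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def satisfies_claim_16(normal, reconstructed, first_output,
                       first_threshold, second_threshold):
    """True when diff(reconstructed, normal) < first_threshold and
    diff(first_output, reconstructed) > second_threshold."""
    return (difference(reconstructed, normal) < first_threshold
            and difference(first_output, reconstructed) > second_threshold)

# A close reconstruction followed by a divergent re-reconstruction:
print(satisfies_claim_16([1.0, 2.0], [1.1, 2.1], [2.0, 3.0], 0.5, 0.5))
```
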
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9 and 11-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Son et al. (“Anomaly Detection with Adversarial Dual Autoencoders”) (hereafter referred to as Son).

Regarding claim 1, Son teaches A data processing method, comprising: obtaining data to be detected (Son, page 2, 2nd paragraph, “Inspired by this, we propose a semi-supervised GAN-based method for anomaly detection task such that both generator and discriminator are made up of autoencoders, called Adversarial Dual Autoencoders (ADAE). Anomalies during testing phase are detected using discriminator pixel-wise reconstruction error. This method is tested on multiple datasets such as MNIST, CIFAR-10 and Multimodal Brain Tumor Segmentation (BRATS) 2017 dataset and it found to achieve state-of-the-art results.” Examiner notes that the datasets are the data to be detected.); and determining an attribute of the data to be detected using a trained anomaly detection model, the attribute indicating whether the data to be detected is anomalous data (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution,” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the attribute of the data to be detected is the anomaly score.), wherein the anomaly detection model is trained based on a difference between a reconstructed data item and a normal data item and a difference between a first output data item and the reconstructed data item (Son, page 4, 5th paragraph, “The generator, besides trying to reconstruct the real input, would also attempt to match the generated data distribution to real data distribution, namely: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1” and Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item.
Examiner further notes that the entirety of Figure 1 is the anomaly detection model.), wherein during training, the normal data item is input into a generative sub-model of the anomaly detection model to obtain the reconstructed data item (Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the entirety of Figure 1 is the generative sub-model.), and the reconstructed data item is input into the generative sub-model to obtain the first output data item (Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the entirety of Figure 1 is the generative sub-model.).

Regarding claim 2, Son teaches The method according to claim 1, wherein the anomaly detection model is trained based on a first loss function, wherein the first loss function is constructed based on the difference between the reconstructed data item and the normal data item and the difference between the first output data item and the reconstructed data item (Son, page 4, 5th paragraph, “The generator, besides trying to reconstruct the real input, would also attempt to match the generated data distribution to real data distribution, namely: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1”. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item.), the first loss function comprises a first sub-function and a second sub-function, the first sub-function is obtained based on the difference between the reconstructed data item and the normal data item, and the second sub-function is obtained based on the difference between the first output data item and the reconstructed data item (Son, page 4, 5th paragraph, “The generator, besides trying to reconstruct the real input, would also attempt to match the generated data distribution to real data distribution, namely: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1”. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the first sub-function is ||X - G(X)||_1 and the second sub-function is ||G(X) - D(G(X))||_1.), wherein training objectives of the first sub-function and the second sub-function are opposite (Son, page 4, 2nd-3rd paragraph, “Thus, we propose ADAE, a network architecture with two sub-networks made up of autoencoders. In order to fulfil the typical role of the respective GAN sub-networks the adversarial training objective is defined as the pixel-wise error between the reconstructed image of data X through G and of generated image G(X) through D: ||G(X) - D(G(X))||_1. The discriminator is trained to maximize this error, which causes it to fail to reconstruct the inputs if they are thought to belong to generated distribution. In essence, the discriminator attempts to separate the real and generated data distributions.
On the other hand, the generator is trained to minimize this error, forces it to produce images in the real data distribution so as to ‘fool’ the discriminator.” Examiner notes that the training objectives of the first and second sub-functions are to minimize and maximize the error.).

Regarding claim 3, Son teaches The method according to claim 2, wherein the anomaly detection model is further trained based on a second loss function, wherein the second loss function comprises a third sub-function (Son, page 4, 5th paragraph, “The discriminator training objective is to reconstruct faithfully the real inputs while fail to do so for generated input, as shown below: L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1”. Examiner notes that L_D is the second loss function. Examiner further notes that the third sub-function is ||G(X) - D(G(X))||_1.), a training objective of the third sub-function is consistent with a training objective of the second sub-function (Son, page 4, 2nd-3rd paragraph, “Thus, we propose ADAE, a network architecture with two sub-networks made up of autoencoders. In order to fulfil the typical role of the respective GAN sub-networks the adversarial training objective is defined as the pixel-wise error between the reconstructed image of data X through G and of generated image G(X) through D: ||G(X) - D(G(X))||_1. The discriminator is trained to maximize this error, which causes it to fail to reconstruct the inputs if they are thought to belong to generated distribution. In essence, the discriminator attempts to separate the real and generated data distributions. On the other hand, the generator is trained to minimize this error, forces it to produce images in the real data distribution so as to ‘fool’ the discriminator.
The discriminator training objective is to reconstruct faithfully the real inputs while fail to do so for generated input, as shown below: L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1”. Examiner notes that the training objective of the third sub-function is ||G(X) - D(G(X))||_1 and the training objective of the second sub-function is ||G(X) - D(G(X))||_1.), the third sub-function is obtained based on a difference between a second output data item and an anomalous data item in a training set (Son, page 4, 5th paragraph, “The discriminator training objective is to reconstruct faithfully the real inputs while fail to do so for generated input, as shown below: L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1” and Son, page 5, Figure 2. Examiner notes that L_D is the second loss function. Examiner further notes that the third sub-function is ||G(X) - D(G(X))||_1. Examiner further notes that the second output data item is D(G(X)) and the anomalous data item is G(X).), and the second output data item is obtained by inputting the anomalous data item in the training set into the generative sub-model (Son, page 5, Figure 2. Examiner notes that the Generator and the Discriminator are the generative sub-model. Examiner further notes that the second output data item is D(G(X)) and the anomalous data item is G(X).).

Regarding claim 4, Son teaches The method according to claim 1, wherein the trained anomaly detection model further comprises a discriminative sub-model for determining whether the reconstructed data item is true or false (Son, page 5, Figure 2, and “in essence, the discriminator attempts to separate the real and the generated data distributions” (Son, page 4, 3rd paragraph).
Examiner notes that the discriminator is the discriminative sub-model. Examiner further notes that G(X) is the reconstructed data item. Examiner additionally notes that by separating the real and generated data, the discriminator is determining whether the data, or reconstructed data item, is true or false.).

Regarding claim 5, Son teaches The method according to claim 4, wherein the trained anomaly detection model is obtained by training the generative sub-model and the discriminative sub-model in an adversarial manner (Son, page 4, 2nd-3rd paragraph, “Thus, we propose ADAE, a network architecture with two sub-networks made up of autoencoders. In order to fulfil the typical role of the respective GAN sub-networks the adversarial training objective is defined as the pixel-wise error between the reconstructed image of data X through G and of generated image G(X) through D: ||G(X) - D(G(X))||_1. The discriminator is trained to maximize this error, which causes it to fail to reconstruct the inputs if they are thought to belong to generated distribution. In essence, the discriminator attempts to separate the real and generated data distributions. On the other hand, the generator is trained to minimize this error, forces it to produce images in the real data distribution so as to ‘fool’ the discriminator.” Examiner notes that the generator and discriminator together are the trained anomaly detection model. Examiner further notes that the generator is the generative sub-model and the discriminator is the discriminative sub-model.).
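As context for the generator/discriminator mappings above, the loss structure the Office Action quotes from Son can be sketched in a few lines. This is an illustrative sketch, not code from the record: G and D are toy callables standing in for the two autoencoders, and the functions reproduce the quoted forms L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1 and L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1, whose shared second term the generator minimizes and the discriminator maximizes.

```python
# Illustrative sketch of the loss structure quoted from Son (hypothetical
# helper names; G and D are toy stand-ins for the autoencoder sub-models).

def l1(a, b):
    """L1 distance between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def generator_loss(X, G, D):
    """L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1, which the generator
    minimizes (reconstruct the input and 'fool' the discriminator)."""
    GX = G(X)
    return l1(X, GX) + l1(GX, D(GX))

def discriminator_loss(X, G, D):
    """L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1 as quoted; per Son, the
    discriminator is trained to maximize the shared adversarial term,
    opposite to the generator's objective."""
    GX = G(X)
    return l1(X, D(X)) + l1(GX, D(GX))

# Toy sub-models: a lossy "generator" and an amplifying "discriminator".
G = lambda v: [x * 0.5 for x in v]
D = lambda v: [x * 2.0 for x in v]
print(generator_loss([1.0, 2.0], G, D))
print(discriminator_loss([1.0, 2.0], G, D))
```

The opposing objectives would be realized in training by ascending the shared term for D while descending it for G; the sketch only evaluates the quoted expressions.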
Regarding claim 6, Son teaches The method according to claim 1, wherein the determining the attribute of the data to be detected comprises: determining a score value of the data to be detected by using the trained anomaly detection model, the score value representing a difference between data obtained by the anomaly detection model by reconstructing the data to be detected and the data to be detected (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution,” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the score value of the data to be detected is the anomaly score.
Examiner further notes that the data obtained by the anomaly detection model by reconstructing the data is D(G(x̂)) and the data to be detected is x̂.); if the score value is not higher than a preset threshold, determining a first attribute of the data to be detected, the first attribute indicating that the data to be detected is normal data (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution,” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the score value of the data to be detected is the anomaly score.
Examiner further notes that the first attribute corresponds to the anomaly score not being higher than ϕ, indicating that the data is not anomalous, or normal.); if the score value is higher than the preset threshold, determining a second attribute of the data to be detected, the second attribute indicating that the data to be detected is anomalous data (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution,” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the score value of the data to be detected is the anomaly score. Examiner further notes that the second attribute corresponds to the anomaly score being higher than ϕ, indicating that the data is anomalous.).

Regarding claim 7, Son teaches The method according to claim 1, wherein the data to be detected belongs to any of the following categories: audio data, electrocardiogram data, electroencephalogram data, image data, video data, point cloud data, or volume data (Son, page 5, 1st paragraph of 4.1 Datasets, “Firstly, we apply ADAE to two benchmark datasets MNIST [16] and CIFAR-10 [17] for a fair comparison with other state-of-the-art methods.
Then, we demonstrate the practicality of our solution with a real life problem of brain tumor detection using BRATS 2017 [18] dataset after training on healthy brain MRI images from Human Connectome Project (HCP) [19] dataset.” Examiner notes that the data is image data.).

Regarding claim 8, Son teaches A method of training an anomaly detection model, comprising: inputting a normal data item in a training set into a generative sub-model of the anomaly detection model to obtain a reconstructed data item (Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the entirety of Figure 1 is the generative sub-model.); inputting the reconstructed data item into the generative sub-model to obtain a first output data item (Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the entirety of Figure 1 is the generative sub-model.); and training the anomaly detection model based on a difference between the reconstructed data item and the normal data item and a difference between the first output data item and the reconstructed data item (Son, page 4, 5th paragraph, “The generator, besides trying to reconstruct the real input, would also attempt to match the generated data distribution to real data distribution, namely: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1” and Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the entirety of Figure 1 is the anomaly detection model.).
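The score-and-threshold logic quoted in the claim 6 analysis (anomaly score A(x̂) = ||x̂ - D(G(x̂))||_2, compared against the threshold ϕ) can be illustrated with a minimal sketch; the function names and toy sub-models below are hypothetical, not from the record.

```python
import math

# Illustrative sketch of Son's quoted anomaly scoring: A(x) = ||x - D(G(x))||_2,
# with an input labeled anomalous when A(x) > phi and normal otherwise.
# G and D are hypothetical stand-ins for the generator and discriminator.

def anomaly_score(x, G, D):
    """Discriminator reconstruction error A(x) = ||x - D(G(x))||_2."""
    DGx = D(G(x))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, DGx)))

def attribute(x, G, D, phi):
    """Second attribute ('anomalous') if A(x) > phi, else first ('normal')."""
    return "anomalous" if anomaly_score(x, G, D) > phi else "normal"

# A perfectly reconstructed input scores 0 and is labeled normal.
identity = lambda v: list(v)
print(attribute([0.0, 0.0], identity, identity, 0.5))
```
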
Regarding claim 9, claim 9 recites substantially similar limitations to claim 2, and is therefore rejected under the same analysis. Regarding claim 11, claim 11 recites substantially similar limitations to claim 3, and is therefore rejected under the same analysis. Regarding claim 12, claim 12 recites substantially similar limitations to claim 4, and is therefore rejected under the same analysis.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 10 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Son in view of Ye et al. (US 2022/0382622 A1) (hereafter referred to as Ye).

Regarding claim 10, Son teaches the method of claim 8. Son does not teach, but Ye does teach wherein the difference between the reconstructed data item and the normal data item is smaller than a first threshold (Ye, page 14, paragraph 0038, “The variance value 154 may be a quantitative or qualitative difference between the input point data value 152 and the expected value 312 generated from the input point data value 152 using the trained model,” where “For example, the detector 410 determines whether the variance value 154 is below a lower bound threshold value or above an upper bound threshold value” (Ye, page 15, paragraph 0048). Examiner notes that the input point data is the normal data item and the expected value generated from the input point data value is the reconstructed data item. Examiner further notes that the first threshold is the lower bound threshold value.), and the difference between the first output data item and the reconstructed data item is larger than a second threshold (Ye, page 14, paragraph 0038, “The variance value 154 may represent a difference between an expected value for the particular point data value 152 based on the trained model 212 and an actual or recorded value (i.e., a ground truth) for the point data value 152,” where “For example, the detector 410 determines whether the variance value 154 is below a lower bound threshold value or above an upper bound threshold value” (Ye, page 15, paragraph 0048).
Examiner notes that the expected data is the reconstructed data item and the actual or recorded value is the first output data item. Examiner further notes that the second threshold is the upper bound threshold value.).

Son and Ye are considered analogous to the claimed invention because they both use autoencoders to detect anomalies in data. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified the anomaly detection in Son to use thresholds as in Ye. Doing so is advantageous because “the detector 210 may determine whether variance values 154 satisfy the threshold variance values 412 for each input point data value 152 or only for specified historical point data values 152H and novel point data values 152N. [0048] Thus the threshold variance values 412 (or optionally, the plurality of threshold variance values 412) defines criteria for determining the anomalous point data value 152A” (Ye, page 15, paragraphs 0047-0048).

Regarding claim 13, Son teaches obtaining data to be detected (Son, page 2, 2nd paragraph, “Inspired by this, we propose a semi-supervised GAN-based method for anomaly detection task such that both generator and discriminator are made up of autoencoders, called Adversarial Dual Autoencoders (ADAE). Anomalies during testing phase are detected using discriminator pixel-wise reconstruction error.
This method is tested on multiple datasets such as MNIST, CIFAR-10 and Multimodal Brain Tumor Segmentation (BRATS) 2017 dataset and it found to achieve state-of-the-art results.” Examiner notes that the datasets are the data to be detected.); and determining an attribute of the data to be detected using a trained anomaly detection model, the attribute indicating whether the data to be detected is anomalous data (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution,” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the attribute of the data to be detected is the anomaly score.), wherein the anomaly detection model is trained based on a difference between a reconstructed data item and a normal data item and a difference between a first output data item and the reconstructed data item (Son, page 4, 5th paragraph, “The generator, besides trying to reconstruct the real input, would also attempt to match the generated data distribution to real data distribution, namely: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1” and Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item.
Examiner further notes that the entirety of Figure 1 is the anomaly detection model.), wherein during training, the normal data item is input into a generative sub-model of the anomaly detection model to obtain the reconstructed data item (Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the entirety of Figure 1 is the generative sub-model.), and the reconstructed data item is input into the generative sub-model to obtain the first output data item (Son, page 3, Figure 1. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the entirety of Figure 1 is the generative sub-model.). Son does not teach, but Ye does teach A computer readable storage medium having machine-executable instructions stored thereon, the machine-executable instructions, when executed by a device, cause the device to perform (Ye, page 17, paragraph 0062, “These computer programs (also known as programs, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms ‘machine-readable medium’ and ‘computer-readable medium’ refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDS)) used to provide machine instructions and/or data to a programmable processor.”). Son and Ye are analogous to the claimed invention because they both use autoencoders to detect anomalies in data.
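[Editorial note, not part of the prosecution record: the Son/ADAE quantities that the rejection maps onto the claims can be sketched numerically. G, D, the toy data, and the threshold below are illustrative placeholders standing in for Son's generator and discriminator autoencoders, not the actual networks.]

```python
import numpy as np

# Placeholder "autoencoders": simple scalings stand in for Son's generator
# G and discriminator D, which are real autoencoder networks in the paper.
def G(x):
    return x * 0.9   # generative sub-model: produces the reconstructed data item G(X)

def D(x):
    return x * 0.95  # discriminative sub-model: produces D(G(X)), the first output data item

def l1(a, b):
    return np.abs(a - b).sum()          # pixel-wise (L1) reconstruction error

def l2(a, b):
    return np.sqrt(((a - b) ** 2).sum())  # L2 norm used in the anomaly score

X = np.ones(4)   # a "normal data item" (toy stand-in for a training image)
GX = G(X)        # reconstructed data item
DGX = D(GX)      # first output data item

# Generator loss as quoted from Son: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1
# (G minimizes the shared adversarial term; D maximizes it.)
L_G = l1(X, GX) + l1(GX, DGX)

# Discriminator loss as quoted from Son: L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1
L_D = l1(X, D(X)) + l1(GX, DGX)

# Test-time anomaly score from Son: A(x̂) = ||x̂ - D(G(x̂))||_2,
# with input flagged anomalous when A(x̂) exceeds the threshold ϕ.
def anomaly_score(x_hat):
    return l2(x_hat, D(G(x_hat)))

phi = 0.5                                 # illustrative threshold, not Son's value
is_anomalous = anomaly_score(X) > phi
```

The sketch only mirrors the structure of the cited equations (which terms feed which loss, and where the threshold comparison happens); the actual networks, data, and ϕ in Son are learned or tuned.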
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to have implemented the anomaly detection in Son on the computer readable storage medium in Ye. Thus, this would be applying a known technique (anomaly detection) to a known device (computer readable storage medium) ready for improvement to yield predictable results (detect anomalies) (MPEP 2143 I. (C) Use of known technique to improve similar devices (methods, or products) in the same way). Regarding claim 14, Son in view of Ye teaches the computer readable medium according to claim 13. Son further teaches The method according to claim 1, wherein the anomaly detection model is trained based on a first loss function, wherein the first loss function is constructed based on the difference between the reconstructed data item and the normal data item and the difference between the first output data item and the reconstructed data item (Son, page 4, 5th paragraph, “The generator, besides trying to reconstruct the real input, would also attempt to match the generated data distribution to real data distribution, namely: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1”. Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item.), the first loss function comprises a first sub-function and a second sub-function, the first sub-function is obtained based on the difference between the reconstructed data item and the normal data item, and the second sub-function is obtained based on the difference between the first output data item and the reconstructed data item (Son, page 4, 5th paragraph, “The generator, besides trying to reconstruct the real input, would also attempt to match the generated data distribution to real data distribution, namely: L_G = ||X - G(X)||_1 + ||G(X) - D(G(X))||_1”.
Examiner notes that G(X) is the reconstructed data item, X is the normal data item, and D(G(X)) is the first output data item. Examiner further notes that the first sub-function is ||X - G(X)||_1 and the second sub-function is ||G(X) - D(G(X))||_1.), wherein training objectives of the first sub-function and the second sub-function are opposite (Son, page 4, 2nd-3rd paragraph, “Thus, we propose ADAE, a network architecture with two sub-networks made up of autoencoders. In order to fulfil the typical role of the respective GAN sub-networks the adversarial training objective is defined as the pixel-wise error between the reconstructed image of data X through G and of generated image G(X) through D: ||G(X) - D(G(X))||_1. The discriminator is trained to maximize this error, which causes it to fail to reconstruct the inputs if they are thought to belong to generated distribution. In essence, the discriminator attempts to separate the real and generated data distributions. On the other hand, the generator is trained to minimize this error, forces it to produce images in the real data distribution so as to ‘fool’ the discriminator.” Examiner notes that the training objectives of the first and second sub-functions are to minimize and maximize the error.). Regarding claim 15, Son in view of Ye teaches the computer readable medium according to claim 14. Son further teaches wherein the anomaly detection model is further trained based on a second loss function, wherein the second loss function comprises a third sub-function (Son, page 4, 5th paragraph, “The discriminator training objective is to reconstruct faithfully the real inputs while fail to do so for generated input, as shown below: L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1”. Examiner notes that L_D is the second loss function.
Examiner further notes that the third sub-function is ||G(X) - D(G(X))||_1.), a training objective of the third sub-function is consistent with a training objective of the second sub-function (Son, page 4, 2nd-3rd paragraph, “Thus, we propose ADAE, a network architecture with two sub-networks made up of autoencoders. In order to fulfil the typical role of the respective GAN sub-networks the adversarial training objective is defined as the pixel-wise error between the reconstructed image of data X through G and of generated image G(X) through D: ||G(X) - D(G(X))||_1. The discriminator is trained to maximize this error, which causes it to fail to reconstruct the inputs if they are thought to belong to generated distribution. In essence, the discriminator attempts to separate the real and generated data distributions. On the other hand, the generator is trained to minimize this error, forces it to produce images in the real data distribution so as to ‘fool’ the discriminator. The discriminator training objective is to reconstruct faithfully the real inputs while fail to do so for generated input, as shown below: L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1”. Examiner notes that the training objective of the third sub-function is ||G(X) - D(G(X))||_1 and a training objective of a second sub-function is ||G(X) - D(G(X))||_1.), the third sub-function is obtained based on a difference between a second output data item and an anomalous data item in a training set (Son, page 4, 5th paragraph, “The discriminator training objective is to reconstruct faithfully the real inputs while fail to do so for generated input, as shown below: L_D = ||X - D(X)||_1 + ||G(X) - D(G(X))||_1” and Son, page 5, Figure 2. Examiner notes that L_D is the second loss function.
Examiner further notes that the third sub-function is ||G(X) - D(G(X))||_1. Examiner further notes that the second output data item is D(G(X)) and the anomalous data item is G(X).), and the second output data item is obtained by inputting the anomalous data item in the training set into the generative sub-model (Son, page 5, Figure 2. Examiner notes that the Generator and the Discriminator are the generative sub-model. Examiner further notes that the second output data item is D(G(X)) and the anomalous data item is G(X).). Regarding claim 16, Son in view of Ye teaches the computer readable storage medium according to claim 14. Son does not teach, but Ye does teach wherein the difference between the reconstructed data item and the normal data item is smaller than a first threshold (Ye, page 14, paragraph 0038, “The variance value 154 may be a quantitative or qualitative difference between the input point data value 152 and the expected value 312 generated from the input point data value 152 using the trained model” where “For example, the detector 410 determines whether the variance value 154 is below a lower bound threshold value or above an upper bound threshold value” (Ye, page 15, paragraph 0048). Examiner notes that the input point data is the normal data item and the expected value generated from the input point value data is the reconstructed data item.
Examiner further notes that the first threshold is the lower bound threshold.), and the difference between the first output data item and the reconstructed data item is larger than a second threshold (Ye, page 14, paragraph 0038, “The variance value 154 may represent a difference between an expected value for the particular point data value 152 based on the trained model 212 and an actual or recorded value (i.e., a ground truth) for the point data value 152” where “For example, the detector 410 determines whether the variance value 154 is below a lower bound threshold value or above an upper bound threshold value” (Ye, page 15, paragraph 0048). Examiner notes that the expected data is the reconstructed data item and the actual or recorded value is the first output data item. Examiner further notes that the second threshold is the upper bound threshold value.). Son and Ye are considered analogous to the claimed invention because they both use autoencoders to detect anomalies in data. It would have been obvious to one having ordinary skill in the art prior to the effective filing date to have modified the anomaly detection in Son to use thresholds as in Ye. Doing so is advantageous because “the detector 210 may determine whether variance values 154 satisfy the threshold variance values 412 for each input point data value 152 or only for specified historical point data values 152H and novel point data values 152N. [0048] Thus the threshold variance values 412 (or optionally, the plurality of threshold variance values 412) defines criteria for determining the anomalous point data value 152A” (Ye, page 15, paragraph 0047). Regarding claim 17, Son in view of Ye teaches the computer readable medium according to claim 13.
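[Editorial note, not part of the prosecution record: the claim 16 mapping above rests on Ye's two-sided test, which flags a variance value that falls below a lower bound or above an upper bound. A minimal sketch, with the function name and values invented for illustration:]

```python
def is_anomalous_variance(variance, lower_bound, upper_bound):
    """Hypothetical sketch of Ye's two-sided check: a variance value is
    flagged as anomalous when it falls below the lower bound threshold
    value or above the upper bound threshold value."""
    return variance < lower_bound or variance > upper_bound

print(is_anomalous_variance(0.3, 0.1, 0.8))   # inside the band -> False
print(is_anomalous_variance(0.9, 0.1, 0.8))   # above upper bound -> True
```

The rejection pairs the lower bound with the reconstructed-vs-normal difference and the upper bound with the first-output-vs-reconstructed difference; this sketch only shows the bound check itself.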
Son further teaches wherein the trained anomaly detection model further comprises a discriminative sub-model for determining whether the reconstructed data item is true or false (Son, page 5, Figure 2, and “in essence, the discriminator attempts to separate the real and the generated data distributions” (Son, page 4, 3rd paragraph). Examiner notes that the discriminator is the discriminative sub-model. Examiner further notes that G(X) is the reconstructed data item. Examiner additionally notes that by separating the real and generated data, the discriminator is determining whether the data, or reconstructed data item, is true or false.). Regarding claim 18, Son in view of Ye teaches the computer readable medium according to claim 17. Son further teaches wherein the trained anomaly detection model is obtained by training the generative sub-model and the discriminative sub-model in an adversarial manner (Son, page 4, 2nd-3rd paragraph, “Thus, we propose ADAE, a network architecture with two sub-networks made up of autoencoders. In order to fulfil the typical role of the respective GAN sub-networks the adversarial training objective is defined as the pixel-wise error between the reconstructed image of data X through G and of generated image G(X) through D: ||G(X) - D(G(X))||_1. The discriminator is trained to maximize this error, which causes it to fail to reconstruct the inputs if they are thought to belong to generated distribution. In essence, the discriminator attempts to separate the real and generated data distributions. On the other hand, the generator is trained to minimize this error, forces it to produce images in the real data distribution so as to ‘fool’ the discriminator.” Examiner notes that the generator and discriminator are the trained anomaly detection model.
Examiner further notes that the generator and discriminator are the generative sub-model and that the discriminator is the discriminative sub-model.). Regarding claim 19, Son in view of Ye teaches the computer readable medium according to claim 13. Son further teaches wherein the determining the attribute of the data to be detected comprises: determining a score value of the data to be detected by using the trained anomaly detection model, the score value representing a difference between data obtained by the anomaly detection model by reconstructing the data to be detected and the data to be detected (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the score value of the data to be detected is the anomaly score.
Examiner further notes that the data obtained by the anomaly detection model by reconstructing the data is D(G(x̂)) and the data to be detected is x̂.); if the score value is not higher than a preset threshold, determining a first attribute of the data to be detected, the first attribute indicating that the data to be detected is normal data (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the score value of the data to be detected is the anomaly score.
Examiner further notes that the first attribute is the anomaly score not being higher than ϕ, indicating the data is not anomalous, or normal.); if the score value is higher than the preset threshold, determining a second attribute of the data to be detected, the second attribute indicating that the data to be detected is anomalous data (Son, page 4, 2nd to last paragraph, “We leverage the reconstruction error in the discriminator as the anomaly score, since the discriminator is trained to separate the normal and generated data distribution” where “Formally, for each input x̂, anomaly score A(x̂) is calculated as: A(x̂) = ||x̂ - D(G(x̂))||_2. This choice is in contrast with many other anomaly detection methods which uses generator reconstruction error for calculating anomaly scores. We notice that our choice of anomaly score calculation allows the better split between normal and anomalous scores, leading to superior performance. We then define a threshold ϕ from which anomalies are determined. A test input x̂ is considered to be anomalous if A(x̂) > ϕ.” (Son, page 5, 1st paragraph). Examiner notes that the score value of the data to be detected is the anomaly score. Examiner further notes that the second attribute is the anomaly score being higher than ϕ, indicating the data is anomalous.). Regarding claim 20, Son in view of Ye teaches the computer readable medium according to claim 13. Son further teaches wherein the data to be detected belongs to any of the following categories: audio data, electrocardiogram data, electroencephalogram data, image data, video data, point cloud data, or volume data (Son, page 5, 1st paragraph of 4.1 Datasets, “Firstly, we apply ADAE to two benchmark datasets MNIST [16] and CIFAR-10 [17] for a fair comparison with other state-of-the-art methods.
Then, we demonstrate the practicality of our solution with a real life problem of brain tumor detection using BRATS 2017 [18] dataset after training on healthy brain MRI images from Human Connectome Project (HCP) [19] dataset.” Examiner notes that the data is image data.). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Fan et al. also discloses anomaly detection using autoencoders and adversarial training methods. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN R HAEFNER whose telephone number is (571)272-1429. The examiner can normally be reached Monday - Thursday: 7:15 am - 5:15 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold can be reached at (571) 431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/K.R.H./Examiner, Art Unit 2148 /MICHELLE T BECHTOLD/Supervisory Patent Examiner, Art Unit 2148

Prosecution Timeline

Mar 07, 2023
Application Filed
Jan 26, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602431
METHODS FOR PERFORMING INPUT-OUTPUT OPERATIONS IN A STORAGE SYSTEM USING ARTIFICIAL INTELLIGENCE AND DEVICES THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12572828
METHOD FOR INDUSTRY TEXT INCREMENT AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 10, 2026
Based on 2 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+66.7%)
4y 2m
Median Time to Grant
Low
PTA Risk
Based on 4 resolved cases by this examiner. Grant probability derived from career allow rate.
