Prosecution Insights
Last updated: April 19, 2026
Application No. 17/726,724

ANOMALY DETECTION IN UNKNOWN DOMAINS USING CONTENT-IRRELEVANT AND DOMAIN-IRRELEVANT COMPRESSED DATA

Non-Final Office Action (§101, §103)

Filed: Apr 22, 2022
Examiner: JONES, CHARLES JEFFREY
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: International Business Machines Corporation
OA Round: 3 (Non-Final)

Grant Probability: 27% (At Risk)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 27% (grants only 27% of cases; 4 granted / 15 resolved; -28.3% vs TC avg)
Interview Lift: +65.9% (strong lift among resolved cases with interview)
Avg Prosecution: 4y 2m (typical timeline)
Career History: 42 total applications across all art units; 27 currently pending
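If the 93% with-interview figure in the summary above is read as this examiner's allowance rate among resolved cases that had an interview, the displayed lift is the simple difference from the career allow rate. This is an assumption about how the report computes lift; the method is not shown in the data:

\[ \text{Interview lift} \approx 93.0\% - 27.1\% = 65.9 \text{ percentage points} \]

(a career allow rate of roughly 27.1% would display rounded as 27%).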

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      34.5%        -5.5%
§103      29.1%        -10.9%
§102      17.7%        -22.3%
§112      17.7%        -22.3%

"vs TC Avg" is measured against the Tech Center average estimate. Based on career data from 15 resolved cases.
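The Tech Center average implied by each row can be recovered by adding the delta back to the allow rate, and it comes out the same for every statute shown:

\[ 34.5\% + 5.5\% = 29.1\% + 10.9\% = 17.7\% + 22.3\% = 40.0\% \]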

Office Action

Grounds: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.

This action is responsive to the Application/amendment filed on 12/10/2025. Claims 1-20 are pending in the case. Claims 1, 8, and 15 are independent claims. Claims 1, 8, and 15 are amended.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/03/2025 and 09/22/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "…generate content-irrelevant latent code comprising a first compressed version of the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "…generate domain-irrelevant latent code comprising a second compressed version of the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "…generate reconstructed sampled runtime input data; wherein the reconstructed sampled runtime input data comprises a reconstruction of the sampled runtime input data based at least in part on the content-irrelevant latent code and the domain-irrelevant latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user using multiple lists of information to create a set of information, wherein those lists of information were created from the same source. See MPEP 2106.04(a)(2)(III)(C).
The claim recites "generating a reconstruction loss based at least in part on the reconstructed sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a person creating a value based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "using the reconstruction loss to determine that the sampled runtime input data comprises an anomalous data candidate," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a person comparing a value to a set and deciding whether the value is too high or too low. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements are:

(a) "receiving…runtime input data generated by a sensor network that senses runtime states of the SUA" (recites the insignificant extra-solution activity of data gathering; see MPEP 2106.05(g));
(b) "at a data collection system" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use; see MPEP 2106.05(h));
(c) "wherein the SUA comprises an operational component and a dynamic environment in which the operational component performs runtime tasks, wherein the dynamic environment comprises an unknown domain" (field of use; see MPEP 2106.05(h));
(d) "wherein the runtime input data comprises content data and domain data" (field of use; see MPEP 2106.05(h));
(e) "wherein the content data is sourced from the operational component while the operational component performs the runtime tasks" (data gathering; see MPEP 2106.05(g));
(f) "wherein the domain data is sourced from the dynamic environment in which the operational component performs the runtime tasks" (data gathering; see MPEP 2106.05(g));
(g) "wherein the data collection system comprises a neural network trained to perform an anomalous data detection task on the runtime input data" (field of use; see MPEP 2106.05(h));
(h) "wherein the runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component" (field of use; see MPEP 2106.05(h));
(i) "and using the neural network of the data collection system to perform iterations of the anomalous data detection task" (merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer"; see MPEP 2106.05(f));
(j) "receiving sampled runtime input data comprising a sample of the runtime input data" (data gathering; see MPEP 2106.05(g));
(k) "using a first encoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(l) "using a second encoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(m) "using a decoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(n) "operational component performs the runtime tasks" (generic computer; see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (e), (f), and (j), obtaining a network input, reflect the well-understood, routine, and conventional activity of "transmitting or receiving data over a network" (see MPEP 2106.05(d)(II)(i), using the Internet to gather data; Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional elements (b), (c), (d), (g), and (h) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Additional elements (i), (k), (l), (m), and (n) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Considered separately and in combination, additional elements (a) through (n) do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 2: The rejection of claim 1 is incorporated, and the claim recites the following further additional elements/limitations.

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "generating the content-irrelevant latent code comprises identifying similarities and differences among the content-irrelevant latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information, where that list is based on perceived similarities and differences of the set. See MPEP 2106.04(a)(2)(III)(C). The claim recites "generating the domain-irrelevant latent code comprises identifying similarities and differences among the domain-irrelevant latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information, where that list is based on perceived similarities and differences of the set. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in claim 2 do not integrate the abstract idea into a practical application. Specifically, the claim lists the additional elements (a) "first encoder stage" and (b) "second encoder stage," each of which merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a) and (b) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Considered separately and in combination, additional elements (a) and (b) do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 3: The rejection of claim 1 is incorporated, and the claim recites the following further additional elements/limitations.

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "generate content-irrelevant and domain-irrelevant (CIDI) latent code from the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in claim 3 do not integrate the abstract idea into a practical application. Specifically, the claim lists the additional element (a) "the first encoder stage of the neural network to…," which merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Considered separately and in combination, additional element (a) does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 4: The rejection of claim 3 is incorporated, and the claim recites the following further additional elements/limitations.

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "wherein the reconstruction of the sampled runtime input data is also based at least in part on the CIDI latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.

Subject Matter Eligibility Analysis, Step 2B: Claim 4 does not include any additional elements, considered separately and in combination, that amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
Regarding claim 5: The rejection of claim 1 is incorporated, and the claim recites the following further additional elements/limitations.

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "disentangle the content-irrelevant code from the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user deciding on and separating elements in a list. See MPEP 2106.04(a)(2)(III)(C). The claim recites "disentangle the domain-irrelevant code from the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user deciding on and separating elements in a list. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in claim 5 do not integrate the abstract idea into a practical application. Specifically, the claim lists the additional element (a) "wherein the neural network has been trained to…," which merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer" (see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional element (a) does not integrate the abstract idea into a practical application, nor does it provide significantly more than the abstract idea, because it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Considered separately and in combination, additional element (a) does not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 6: The rejection of claim 5 is incorporated, and the claim recites the following further additional elements/limitations.

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "disentangle the content-irrelevant code from the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user deciding on and separating elements in a list. See MPEP 2106.04(a)(2)(III)(C). The claim recites "disentangle the domain-irrelevant code from the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user deciding on and separating elements in a list. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in claim 6 do not integrate the abstract idea into a practical application. Specifically, the claim lists the additional elements (a) "adversarial content discriminator has been used to train the neural network" and (b) "adversarial domain discriminator has been used to train the neural network," each of which merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a) and (b) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Considered separately and in combination, additional elements (a) and (b) do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 7: The rejection of claim 2 is incorporated, and the claim recites the following further additional elements/limitations.

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "identify similarities and differences among the content-irrelevant latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information, where that list is based on perceived similarities and differences of the set. See MPEP 2106.04(a)(2)(III)(C). The claim recites "identify similarities and differences among the domain-irrelevant latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information, where that list is based on perceived similarities and differences of the set. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements recited in claim 7 do not integrate the abstract idea into a practical application. Specifically, the claim lists the additional elements (a) "the first encoder stage of the neural network has been trained" and (b) "the second encoder stage of the neural network has been trained," each of which merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a) and (b) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Considered separately and in combination, additional elements (a) and (b) do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claim 8:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "…generate content-irrelevant latent code comprising a first compressed version of the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "…generate domain-irrelevant latent code comprising a second compressed version of the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "…generate reconstructed sampled runtime input data; wherein the reconstructed sampled runtime input data comprises a reconstruction of the sampled runtime input data based at least in part on the content-irrelevant latent code and the domain-irrelevant latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user using multiple lists of information to create a set of information, wherein those lists of information were created from the same source. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "generating a reconstruction loss based at least in part on the reconstructed sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a person creating a value based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "using the reconstruction loss to determine that the sampled runtime input data comprises an anomalous data candidate," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a person comparing a value to a set and deciding whether the value is too high or too low. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements are:

(a) "receiving…runtime input data generated by a sensor network that senses runtime states of the SUA" (recites the insignificant extra-solution activity of data gathering; see MPEP 2106.05(g));
(b) "at a data collection system" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use; see MPEP 2106.05(h));
(c) "wherein the SUA comprises an operational component and a dynamic environment in which the operational component performs runtime tasks, wherein the dynamic environment comprises an unknown domain" (field of use; see MPEP 2106.05(h));
(d) "wherein the runtime input data comprises content data and domain data" (field of use; see MPEP 2106.05(h));
(e) "wherein the content data is sourced from the operational component while the operational component performs the runtime tasks" (data gathering; see MPEP 2106.05(g));
(f) "wherein the domain data is sourced from the dynamic environment in which the operational component performs the runtime tasks" (data gathering; see MPEP 2106.05(g));
(g) "wherein the data collection system comprises a neural network trained to perform an anomalous data detection task on the runtime input data" (field of use; see MPEP 2106.05(h));
(h) "wherein the runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component" (field of use; see MPEP 2106.05(h));
(i) "and using the neural network of the data collection system to perform iterations of the anomalous data detection task" (merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer"; see MPEP 2106.05(f));
(j) "receiving sampled runtime input data comprising a sample of the runtime input data" (data gathering; see MPEP 2106.05(g));
(k) "using a first encoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(l) "using a second encoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(m) "using a decoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(n) "a memory and a processor" (generic computer; see MPEP 2106.05(f));
(o) "operational component performs the runtime tasks" (generic computer; see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (e), (f), and (j), obtaining a network input, reflect the well-understood, routine, and conventional activity of "transmitting or receiving data over a network" (see MPEP 2106.05(d)(II)(i), using the Internet to gather data; Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional elements (b), (c), (d), (g), and (h) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Additional elements (i), (k), (l), (m), (n), and (o) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Considered separately and in combination, additional elements (a) through (o) do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claims 9-14: Claims 9-14 are rejected under the same §101 analysis as claims 2-7, due to the substantially similar limitations and elements of claims 2-7, respectively.
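Before the analysis continues to independent claim 15, it helps to make concrete what the rejected claims actually compute: mechanically, the recited anomalous data detection task is a dual-encoder autoencoder scored by reconstruction loss. The following is a minimal sketch, assuming a PyTorch implementation; every module name, dimension, and the threshold are illustrative assumptions, not taken from the application or the record.

import torch
import torch.nn as nn

WINDOW, CHANNELS, LATENT = 64, 8, 16   # illustrative sizes, not from the claims

def mlp(d_in, d_out):
    return nn.Sequential(nn.Flatten(), nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))

content_irrelevant_enc = mlp(WINDOW * CHANNELS, LATENT)   # first encoder stage
domain_irrelevant_enc = mlp(WINDOW * CHANNELS, LATENT)    # second encoder stage
decoder = nn.Sequential(nn.Linear(2 * LATENT, 128), nn.ReLU(),
                        nn.Linear(128, WINDOW * CHANNELS))  # decoder stage

def detect(sampled_window: torch.Tensor, threshold: float = 0.05) -> bool:
    # sampled_window: one (CHANNELS, WINDOW) slice of the runtime sensor stream
    x = sampled_window.reshape(1, CHANNELS, WINDOW)
    z_ci = content_irrelevant_enc(x)                   # content-irrelevant latent code
    z_di = domain_irrelevant_enc(x)                    # domain-irrelevant latent code
    recon = decoder(torch.cat([z_ci, z_di], dim=1))    # reconstructed sampled input
    loss = nn.functional.mse_loss(recon, x.reshape(1, -1))  # reconstruction loss
    return loss.item() > threshold   # flag an anomalous data candidate if loss is high

The eligibility dispute is, in effect, over whether scoring a sensor window this way is a mental process run on generic hardware or a specific technical improvement; the sketch is offered only to make the recited data flow concrete.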
Regarding claim 15:

Subject Matter Eligibility Analysis, Step 2A Prong 1: The claim recites "…generate content-irrelevant latent code comprising a first compressed version of the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "…generate domain-irrelevant latent code comprising a second compressed version of the sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user creating a list of information based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "…generate reconstructed sampled runtime input data; wherein the reconstructed sampled runtime input data comprises a reconstruction of the sampled runtime input data based at least in part on the content-irrelevant latent code and the domain-irrelevant latent code," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user using multiple lists of information to create a set of information, wherein those lists of information were created from the same source. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "generating a reconstruction loss based at least in part on the reconstructed sampled runtime input data," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a person creating a value based on a set of information. See MPEP 2106.04(a)(2)(III)(C).

The claim recites "using the reconstruction loss to determine that the sampled runtime input data comprises an anomalous data candidate," which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a person comparing a value to a set and deciding whether the value is too high or too low. See MPEP 2106.04(a)(2)(III)(C).

Subject Matter Eligibility Analysis, Step 2A Prong 2: The additional elements are:

(a) "receiving…runtime input data generated by a sensor network that senses runtime states of the SUA" (recites the insignificant extra-solution activity of data gathering; see MPEP 2106.05(g));
(b) "at a data collection system" (merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use; see MPEP 2106.05(h));
(c) "wherein the SUA comprises an operational component and a dynamic environment in which the operational component performs runtime tasks, wherein the dynamic environment comprises an unknown domain" (field of use; see MPEP 2106.05(h));
(d) "wherein the runtime input data comprises content data and domain data" (field of use; see MPEP 2106.05(h));
(e) "wherein the content data is sourced from the operational component while the operational component performs the runtime tasks" (data gathering; see MPEP 2106.05(g));
(f) "wherein the domain data is sourced from the dynamic environment in which the operational component performs the runtime tasks" (data gathering; see MPEP 2106.05(g));
(g) "wherein the data collection system comprises a neural network trained to perform an anomalous data detection task on the runtime input data" (field of use; see MPEP 2106.05(h));
(h) "wherein the runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component" (field of use; see MPEP 2106.05(h));
(i) "and using the neural network of the data collection system to perform iterations of the anomalous data detection task" (merely recites a generic computer on which to perform the abstract idea, e.g., "apply it on a computer"; see MPEP 2106.05(f));
(j) "receiving sampled runtime input data comprising a sample of the runtime input data" (data gathering; see MPEP 2106.05(g));
(k) "using a first encoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(l) "using a second encoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(m) "using a decoder stage of the neural network to…" (generic computer; see MPEP 2106.05(f));
(n) "a computer program product … a computer readable storage medium" (generic computer; see MPEP 2106.05(f));
(o) "operational component performs the runtime tasks" (generic computer; see MPEP 2106.05(f)).

Subject Matter Eligibility Analysis, Step 2B: Additional elements (a), (e), (f), and (j), obtaining a network input, reflect the well-understood, routine, and conventional activity of "transmitting or receiving data over a network" (see MPEP 2106.05(d)(II)(i), using the Internet to gather data; Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362). Additional elements (b), (c), (d), (g), and (h) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they merely specify a field of use in which the abstract idea is to take place (see MPEP 2106.05(h)). Additional elements (i), (k), (l), (m), (n), and (o) do not integrate the abstract idea into a practical application, nor do these limitations provide significantly more than the abstract idea, because they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)). Considered separately and in combination, additional elements (a) through (o) do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.

Regarding claims 16-20: Claims 16-18 are rejected under the same §101 analysis as claims 2-4, due to the substantially similar limitations and elements of claims 2-4, respectively. Claim 19 is rejected under the same §101 analysis as claims 5-6, due to the substantially similar limitations and elements of claims 5-6. Claim 20 is rejected under the same §101 analysis as claim 7, due to the substantially similar limitations and elements of claim 7.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. ("Dual Encoder-Decoder Based Generative Adversarial Networks for Disentangled Facial Representation Learning," henceforth Hu) in view of Liu et al. (US 2020/0265219 A1, "DISENTANGLED REPRESENTATION LEARNING GENERATIVE ADVERSARIAL NETWORK FOR POSE-INVARIANT FACE RECOGNITION," henceforth Liu).

Regarding claim 1: Hu discloses receiving, at a data collection system (where the data collection system is considered a computer, as it is a system that collects and distributes data, and DED-GAN is a computer-based system), runtime input data (Hu, Page 6, Col. 2, Paragraph 1, "Our models were trained separately on the Multi-PIE…and CASIA…dataset," where the images and the image dataset are considered runtime input data, as they are contextual data being input into the GAN at runtime) generated by a sensor network that senses runtime states of the SUA (Hu, Page 6, Col. 2, Paragraph 1, "Our models were trained separately on the Multi-PIE…and CASIA…dataset," where the image datasets contain images created by sensors, which are considered a sensor network, and the sensed features of the images (pose, color, saturation, identity, image/video stream, lighting, etc.) are considered runtime states).
Hu discloses wherein the runtime input data comprises content data and domain data (Hu, Page 4, Col. 2, Paragraph 2, "Given a face image x with label y = {ya, yd, yc}, where ya, yd and yc represent the labels for real/fake, identity and pose," where pose is considered content data and identity is considered domain data).

Hu discloses wherein the data collection system comprises a neural network trained to perform an anomalous data detection task on the runtime input data (Hu, Page 6, Col. 1, Paragraph 4, "The code layer of the autoencoder is followed by Da, Dc and Dd where Da(x) is for real-fake classification, Dc(x) is for pose regression and Dd is for identity prediction," where the GAN's classification and prediction of images as real/fake is considered detecting anomalous data in the input data).

Hu discloses using the neural network of the data collection system to perform iterations of the anomalous data detection task (Hu, Page 6, Col. 2, Algorithm 1, iteration t, where iteration t shows that the GAN operates in iterations).

Hu discloses receiving sampled runtime input data comprising a sample of the runtime input data (Hu, Page 4, Col. 2, Paragraph 2, "…Given a face image x…," where the given face image x is considered input data being received).

Hu discloses using a first encoder stage of the neural network to generate content-irrelevant latent code comprising a first compressed version of the sampled runtime input data, and using a second encoder stage of the neural network to generate domain-irrelevant latent code comprising a second compressed version of the sampled runtime input data (Hu, Page 4, Col. 2, Paragraph 1, "The generator generates unlabelled realistic samples from the latent variable model to improve the discriminative ability of the discriminator," where the latent variable model contains pose and facial identity, which are considered content-irrelevant latent code and domain-irrelevant latent code from the input data (Hu, Page 5, Col. 1, Paragraph 3, "The encoder-decoder structured generator is used for face rotation and untangling the identity from pose variation"), and the output is considered a compressed version of the input. Further, the encoder of the generator/discriminator is considered to be both the first and second encoder stages of the claims, as it performs the function of both the first and second encoders).

Hu discloses using a decoder stage of the neural network to generate reconstructed sampled runtime input data (Hu, Page 5, Col. 1, Paragraph 3, "The representation is one part of the input to the decoder to synthesise various faces of the same subject with different attributes, i.e., by virtually rotating the facial pose code," where Hu's use of a decoder to create images with the same subject and different attributes is considered using a decoder stage of the neural network to generate reconstructed input data, which corresponds to reconstructed sampled runtime input data).

Hu discloses wherein the reconstructed sampled runtime input data comprises a reconstruction of the sampled runtime input data based at least in part on the content-irrelevant latent code and the domain-irrelevant latent code (Hu, Page 4, Figure 1d, which shows the latent code from the generator's encoder being sent to the generator's decoder).

Hu discloses generating a reconstruction loss based at least in part on the reconstructed sampled runtime input data (Algorithm 1, where LDpixel includes the reconstruction error of synthetic images; see also Hu, Page 5, Col. 1, Paragraph 4, "The discriminator aims to classify the face image x as real or fake, to maximise the gap between the reconstruction error of real image and that of the synthetic image, and to estimate its identity and pose," where maximizing the gap between the reconstruction error of real images and that of synthetic images corresponds to a reconstruction loss based on reconstructed input data, and Hu, Equations 7, 13, and 18, which show the reconstruction loss computed by comparing the generated image to the real image in terms of pixel values).

Hu discloses using the reconstruction loss to determine that the sampled runtime input data comprises an anomalous data candidate (Algorithm 1, Steps 3 and 6; see also Hu, Page 5, Equations 9 and 13, where a loss function being significantly higher than expected would indicate an outlier that would be deemed fake/anomalous, and Hu, Page 4, Figure 1d, where the real/fake determination is a determination of outlier/anomaly).

Hu does not explicitly disclose: receiving, at a data collection system, runtime input data generated by a sensor network that senses runtime states of the SUA… wherein the SUA comprises an operational component, and a dynamic environment in which the operational component performs runtime tasks…wherein the runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component, wherein the runtime input data comprises content data and domain data, wherein the content data is sourced from the operational component while the operational component performs the runtime tasks, and wherein the domain data is sourced from the environment in which the operational component performs the runtime tasks.

Liu discloses receiving, at a data collection system, runtime input data generated by a sensor network that senses runtime states of the SUA (Liu, Figure 1 and [0037], "As shown in FIG. 1, the system 100 may communicate with one or more image capture device(s). In general, the system 100 may be any device, apparatus or system configured for carrying out instructions for, and may operate as part of, or in collaboration with, various computers, systems, devices, machines, mainframes, networks or servers," where the data collection system is considered a computer, as it is a system that collects and distributes data, and the videos captured by the image capture devices are considered runtime input data, as the data is created by sensors that sense runtime states (pose, color, saturation, identity, image/video stream, lighting, etc.) and is input into the GAN); wherein the SUA comprises an operational component, and a dynamic environment in which the operational component performs runtime tasks (Liu, Figure 1 and [0037], where the image capture devices are considered an operational component that performs the runtime task of capturing video of a dynamic environment), wherein the dynamic environment comprises an unknown domain (Liu, [0061], "With a single-image DR GAN, an identity representation f(x) can be extracted from a single image x, and different faces of the same person, in any pose, can be generated. In practice, a number of images may often be available, for instance, from video feeds provided by different cameras capturing a person with different poses, expressions, and under different lighting conditions," where identifying representation f(x) of an extracted image is considered performing a runtime task in an unknown dynamic environment); wherein the runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component (Liu, [0061], "In practice, a number of images may often be available, for instance, from video feeds provided by different cameras capturing a person with different poses, expressions, and under different lighting conditions," where the input video captured by a camera/video feed is considered a streaming time-series of sensor measurements during operation of the operational component, as the camera corresponds to an operational component).

Liu discloses wherein the runtime input data comprises content data and domain data (Liu, [0047], "Then, at process block 204, a trained DR-GAN may be applied to generate an identity representation of the subject, or object. This step may include extracting the identity representation, in the form of features or feature vectors, by inputting received one or more images into one or more encoders of the DR-GAN. In some aspects, a pose of the subject or object in the received image(s) may be determined at process block 204," where the pose in the input data is considered content data and the identity in the input data is considered domain data (see also Liu, [0032], "While G in the present framework serves as a face rotator, D may be trained to…predict face identity and pose at substantially the same time")), wherein the content data is sourced from the operational component while the operational component performs the runtime tasks, and wherein the domain data is sourced from the environment in which the operational component performs the runtime tasks (Liu, [0046], "As shown, the process 200 may begin at process block 202 with providing images depicting at least one subject to be identified. The imaging may include single or multiple images acquired, for example, using various monitoring devices or cameras," where the monitoring devices/cameras are considered operational components that perform the runtime task of capturing videos that are used as a source of input into the GAN).

References Hu and Liu are analogous art because they are from the same field of endeavor of using machine learning (GANs) for image recognition.
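The streaming limitation that the rejection maps to Liu's video feeds reduces, in implementation terms, to windowed sampling of a continuous sensor stream, with each full window becoming the claimed sampled runtime input data. A minimal sketch, with the stream source, tick format, and window size as illustrative assumptions:

from collections import deque
from typing import Iterable, Iterator, List

def sample_windows(stream: Iterable[List[float]], window: int = 64) -> Iterator[List[List[float]]]:
    # Yield sliding windows of multi-sensor readings from a continuous stream.
    buf: deque = deque(maxlen=window)
    for reading in stream:            # one vector of sensor measurements per tick
        buf.append(reading)
        if len(buf) == window:        # a complete sample of the runtime input data
            yield list(buf)           # hand off to the anomaly detector

Each yielded window would be scored by a reconstruction-loss detector such as the sketch earlier in this report; the handoff shown here is illustrative, not part of the record.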
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Hu and Liu before him or her, to modify the offline image input of Hu to include the steaming video feeding inputs of Liu for multiple views of the same subject and realistic input modality for law enforcement/biometric applications Liu states “…In practice, a number of images may often be available, for instance, from video feeds provided by different cameras capturing a person with different poses, expressions, and under different lighting conditions.” (Liu, [0061]) and “Nevertheless, the ability to generate realistic frontal faces and accurately recognize subjects would be beneficial in many biometric applications, including identifying suspects or witnesses in law enforcement.”(Liu, [0005]) Regarding claim 2, Hu-Liu teaches the computer-implemented method of claim 1 (and thus the rejection of claim 1 is incorporated). Hu discloses a first encoder stage generating the content-irrelevant latent code comprises identifying similarities and differences among the content-irrelevant latent code and the second encoder stage generating the domain-irrelevant latent code comprises identifying similarities and differences among the domain-irrelevant latent code (Hu, Page 4, Col. 1, Paragraph 1, “The encoder-decoder structured generator is used for face rotation and untangling the identity from pose variation” where the generator untangling the identity from the pose is considered identifying similarities and differences among content-irrelevant latent code and domain-irrelevant latent code) Regarding claim 3, Hu-Liu teaches the computer-implemented method of claim 1 (and thus the rejection of claim 1 is incorporated). Hu discloses wherein the anomalous data detection task further comprises using the first encoder stage of the neural network to generate content-irrelevant and domain-irrelevant (CIDI) latent code from the sampled runtime input data. (Hu, Page 4, Col. 1, Paragraph 1, “The encoder-decoder structured generator is used for face rotation and untangling the identity from pose variation” where the generator untangling the identity from the pose is considered identifying similarities and differences among content-irrelevant latent code and domain-irrelevant latent code) Regarding claim 4, Hu-Liu teaches the computer-implemented method of claim 3 (and thus the rejection of claim 3 is incorporated). Hu discloses wherein the reconstruction of the sampled runtime input data is also based at least in part on the CIDI latent code (Figure 1d, the latent code from the generator’s encoder is considered to be contain pose and identity information which is considered content-irrelevant and domain-irrelevant latent code) Regarding claim 5, Hu-Liu teaches the computer-implemented method of claim 1 (and thus the rejection of claim 1 is incorporated). Hu discloses wherein the neural network has been trained to: disentangle the content-irrelevant code from the sampled runtime input data; and disentangle the domain-irrelevant code from the sampled runtime input data(Hu, Page 2, Col 2, Paragraph 2, “This paper addresses the problem of learning a generative model for disentangled facial representation extraction. 
By combining the advanced techniques of GAN-based representation learning methods, we propose to learn disentangled pose-robust features by modeling the complex non-linear transform between face images with different poses” where untangling the identity from the pose is considered disentangle the content-irrelevant code from the input data and disentangle the domain-irrelevant code from the input data) Regarding claim 6, Hu-Liu teaches the computer-implemented method of claim 5 (and thus the rejection of claim 5 is incorporated) Hu discloses an adversarial content discriminator has been used to train the neural network to disentangle the content-irrelevant code from the sampled runtime input data; and an adversarial domain discriminator has been used to train the neural network to disentangle the domain-irrelevant code from the sampled runtime input data. (Hu, Page 4, Col. 2, Paragraph 1, “… a discriminative model, D, is trained to distinguish the samples synthesised by G and real ones from the training data” where the discriminator being trained to synthesize and distinguish images is considered to being trained to disentangle pose and identity which are considered content-irrelevant latent/domain-irrelevant latent code (Hu, Page 4, Col. 1. Paragraph 1, “The encoder-decoder structured discriminator is used for facial reconstruction, pose estimation, identity classification and real/fake adversarial learning”)) Regarding claim 7, Hu-Liu teaches the computer-implemented method of claim 2 (and thus the rejection of claim 2 is incorporated). Hu discloses the first encoder stage of the neural network has been trained to identify similarities and differences among the content-irrelevant latent code and the second encoder stage of the neural network has been trained to identify similarities and differences among the domain-irrelevant latent code. (Hu, Page 4, Col. 2, Paragraph 1, “a generative model, G, is trained to synthesise images resembling the real data … from the training data” where the generator being trained to synthesize and distinguish images of samples is considered to being trained to disentangle pose and identity which are considered content-irrelevant latent/domain-irrelevant latent code (Hu, Page 4, Col. 2, Paragraph 1, “The encoder-decoder structured generator is used for face rotation and untangling the identity from pose variation”)) Regarding claim 8, Hu-Liu discloses a computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor system comprising a data collection system(Hu, Page 6, Col. 
2, Algorithm 1, where algorithm 1 is considered a computer implemented method that detects real/fake images (anomalous data) and is associated with a system under analysis as it is a system that is doing an analysis on images) Hu discloses receiving, at a data collection system(where the data collection system is considered computer as it is a system that collects and distributes data and DED-GAN is a computer based system), runtime input data(Hu, Page 6, Col 2, Paragraph 1, “Our models were trained separately on the Multi-PIE…and CASIA…dataset” where the images and the image dataset are considered runtime input data as they are contextual data being input into the GAN at runtime) generated by a sensor network that senses runtime states of the SUA(Hu, Page 6, Col 2, Paragraph 1, “Our models were trained separately on the Multi-PIE…and CASIA…dataset” where the image datasets have images that are created by sensors that are considered a sensor network and the images sense features of the image(pose, color, saturation, identity, image/video steam, lighting etc.…) are considered runtime states) Hu discloses wherein the runtime input data comprises content data and domain data(Hu, Page 4, Col. 2, Paragraph 2 “Given a face image x with label y = {ya, yd, yc}, where ya, yd and yc represent the labels for real/fake, identity and pose” where pose is considered content data and identity is considered domain data) Hu discloses wherein the data collection system comprises a neural network trained to perform an anomalous data detection task on the runtime input data(Hu, Page 6, Col. 1, Paragraph 4, “The code layer of the autoencoder is followed by Da, Dc and Dd where Da(x) is for real-fake classification, Dc(x) is for pose regression and Dd is for identity prediction” where the process of the GAN classification and prediction of images as real/fake is considered to be detecting anomalous data of input data) Hu discloses and using the neural network of the data collection system to perform iterations of the anomalous data detection task(Hu, Page 6, Col. 2, Algorithm 1, iteration t, where iteration t shows that the GAN operates in iteration) Hu discloses receiving sampled runtime input data comprising a sample of the runtime input data(Hu, Page 4, Col 2, Paragraph 2, “…Given a face image x…” where the given face image x is considered an input data being received) Hu discloses using a first encoder stage of a the neural network to generate content-irrelevant latent code comprising a first compressed version of the sampled runtime input data and using a second encoder stage of the neural network to generate domain-irrelevant latent code comprising a second compressed version of the sampled runtime input data(Hu, Page 4, Col 2, Paragraph 1, “The generator generates unlabelled realistic samples from the latent variable model to improve the discriminative ability of the discriminator” where the latent variable model contains pose and facial identity which are considered content-irrelevant latent code and domain-irrelevant latent code from input data (Hu, Page 5, Col 1, Paragraph 3,“The encoder-decoder structured generator is used for face rotation and untangling the identity from pose variation”) and the output is considered a compressed version of the input. 
Hu discloses using a decoder stage of the neural network to generate reconstructed sampled runtime input data (Hu, Page 5, Col. 1, Paragraph 3, "The representation is one part of the input to the decoder to synthesise various faces of the same subject with different attributes, i.e., by virtually rotating the facial pose code," where Hu's use of a decoder to create images of the same subject with different attributes is considered using a decoder stage of the neural network to generate reconstructed input data, which corresponds to the reconstructed sampled runtime input data).

Hu discloses wherein the reconstructed sampled runtime input data comprises a reconstruction of the sampled runtime input data based at least in part on the content-irrelevant latent code and the domain-irrelevant latent code (Hu, Page 4, Figure 1d, which shows the latent code from the generator's encoder being sent to the generator's decoder).

Hu discloses generating a reconstruction loss based at least in part on the reconstructed sampled runtime input data (Algorithm 1, where the pixel-level loss LDpixel includes the reconstruction error of synthetic images; see also Hu, Page 5, Col. 1, Paragraph 4, "The discriminator aims to classify the face image x as real or fake, to maximise the gap between the reconstruction error of real image and that of the synthetic image, and to estimate its identity and pose," where maximizing the gap between the reconstruction error of real images and that of synthetic images corresponds to a reconstruction loss based on reconstructed input data, and Hu, Equations 7, 13, and 18, which show the reconstruction loss computed by comparing the generated image to the real image in terms of pixel values).

Hu discloses using the reconstruction loss to determine that the sampled runtime input data comprises an anomalous data candidate (Algorithm 1, Steps 3 and 6; see also Hu, Page 5, Equations 9 and 13, where a loss value significantly higher than expected would indicate an outlier that would be deemed fake/anomalous, and Hu, Page 4, Figure 1d, where the real/fake decision corresponds to the outlier/anomaly determination). A sketch of this reconstruction-loss test appears below.
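The reconstruction-loss test mapped here can be pictured with a short sketch: decode both latent codes back to the input space, score the reconstruction error, and flag the sample when the error is high. The decoder shape, the MSE criterion, and the 0.5 threshold are all illustrative assumptions, not values from Hu or from the application.

```python
# Hypothetical reconstruction-loss anomaly test. The decoder, the MSE
# criterion, and the threshold are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

decoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 128))

def is_anomaly_candidate(x, ci_code, di_code, threshold: float = 0.5) -> bool:
    # The reconstruction is based on *both* latent codes, mirroring the
    # claim's "based at least in part on" language.
    latent = torch.cat([ci_code, di_code], dim=1)
    reconstruction = decoder(latent)
    loss = F.mse_loss(reconstruction, x)   # element-wise reconstruction error
    # A loss well above what the model achieves on normal data marks the
    # sample as an anomalous-data candidate (cf. Hu's real/fake gap).
    return loss.item() > threshold

x = torch.randn(1, 128)
print(is_anomaly_candidate(x, torch.randn(1, 16), torch.randn(1, 16)))
```

In practice the threshold would be calibrated on reconstruction errors observed for known-normal data rather than fixed a priori.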
While Hu does not explicitly disclose: receiving, at a data collection system, runtime input data generated by a sensor network that senses runtime states of the SUA … wherein the SUA comprises an operational component and a dynamic environment in which the operational component performs runtime tasks … wherein the runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component, wherein the runtime input data comprises content data and domain data, wherein the content data is sourced from the operational component while the operational component performs the runtime tasks, and wherein the domain data is sourced from the environment in which the operational component performs the runtime tasks:

Liu discloses receiving, at a data collection system, runtime input data generated by a sensor network that senses runtime states of the SUA (Liu, Figure 1 and [0037], "As shown in FIG. 1, the system 100 may communicate with one or more image capture device(s). In general, the system 100 may be any device, apparatus or system configured for carrying out instructions for, and may operate as part of, or in collaboration with, various computers, systems, devices, machines, mainframes, networks or servers," where the data collection system is considered a computer, as it is a system that collects and distributes data, and the videos captured by the image capture devices are considered runtime input data, as they are data created by sensors that sense runtime states (pose, color, saturation, identity, image/video stream, lighting, etc.) and are input into the GAN).

Liu discloses wherein the SUA comprises an operational component and a dynamic environment in which the operational component performs runtime tasks (Liu, Figure 1 and [0037], same passage, where the image capture devices are considered an operational component that performs the runtime task of capturing video of a dynamic environment), wherein the dynamic environment comprises an unknown domain (Liu, [0061], "With a single-image DR GAN, an identity representation f(x) can be extracted from a single image x, and different faces of the same person, in any pose, can be generated. In practice, a number of images may often be available, for instance, from video feeds provided by different cameras capturing a person with different poses, expressions, and under different lighting conditions," where extracting the identity representation f(x) from an image is considered performing a runtime task in an unknown, dynamic environment).

Liu discloses wherein the runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component (Liu, [0061], "In practice, a number of images may often be available, for instance, from video feeds provided by different cameras capturing a person with different poses, expressions, and under different lighting conditions," where the video captured by a camera, i.e., a video feed, is considered a streaming time series of sensor measurements during operation of the operational component, as the camera corresponds to an operational component); a sketch of sampling such a stream appears after this passage.

Liu discloses wherein the runtime input data comprises content data and domain data (Liu, [0047], "Then, at process block 204, a trained DR-GAN may be applied to generate an identity representation of the subject, or object. This step may include extracting the identity representation, in the form of features or feature vectors, by inputting received one or more images into one or more encoders of the DR-GAN. In some aspects, a pose of the subject or object in the received image(s) may be determined at process block 204," where the pose in the input data is considered content data and the identity in the input data is considered domain data; see also Liu, [0032], "While G in the present framework serves as a face rotator, D may be trained to…predict face identity and pose at substantially the same time").
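Because the mapping above treats a continuous video feed as "streaming time-series sensor measurements" from which samples are drawn, a small sketch may help fix the idea of sampling a stream at runtime. The generator, the Gaussian stand-in values, and the every-30th-reading rate are hypothetical, not anything disclosed by Liu.

```python
# Hypothetical stream sampler: draw periodic samples from a continuous
# feed of runtime sensor measurements. Rates and values are assumptions.
import itertools
import random
import time

def sensor_stream():
    """Stand-in for a continuous feed of runtime sensor measurements."""
    while True:
        yield {"t": time.time(), "value": random.gauss(0.0, 1.0)}

def sample_stream(stream, every_n: int = 30):
    """Keep every Nth reading as the 'sampled runtime input data'."""
    for i, reading in enumerate(stream):
        if i % every_n == 0:
            yield reading

for reading in itertools.islice(sample_stream(sensor_stream()), 3):
    print(reading)
```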
Liu discloses wherein the content data is sourced from the operational component while the operational component performs the runtime tasks, and wherein the domain data is sourced from the environment in which the operational component performs the runtime tasks (Liu, [0046], "As shown, the process 200 may begin at process block 202 with providing images depicting at least one subject to be identified. The imaging may include single or multiple images acquired, for example, using various monitoring devices or cameras," where the monitoring devices/cameras are considered operational components that perform the runtime task of capturing the video that serves as the source input to the GAN).

Hu and Liu are analogous art because they are from the same field of endeavor: using machine learning (GANs) for image recognition. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Hu and Liu before him or her, to modify the offline image input of Hu to include the streaming video feed inputs of Liu, in order to obtain multiple views of the same subject and a realistic input modality for law-enforcement/biometric applications. Liu states, "…In practice, a number of images may often be available, for instance, from video feeds provided by different cameras capturing a person with different poses, expressions, and under different lighting conditions" (Liu, [0061]), and "Nevertheless, the ability to generate realistic frontal faces and accurately recognize subjects would be beneficial in many biometric applications, including identifying suspects or witnesses in law enforcement" (Liu, [0005]).

Regarding claims 9-14, the rejection of claim 8 is incorporated; further, claims 9-14 are rejected under the same rationale as set forth in the rejections of claims 2-7, respectively, due to the substantially similar limitations.

Regarding claim 15, Hu-Liu discloses a computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor system comprising a data collection system. Claim 15 recites limitations substantially identical to those of claim 8; the rejection of claim 8, including the Hu and Liu citations and the rationale for combining them set forth above, is incorporated, and claim 15 is rejected under the same rationale.
Regarding claims 16-18, the rejection of claim 15 is incorporated; further, claims 16-18 are rejected under the same rationale as set forth in the rejections of claims 2-4, respectively, due to the substantially similar limitations. Regarding claim 19, the rejection of claim 15 is incorporated; further, claim 19 is rejected under the same rationale as set forth in the rejections of claims 5-6 due to the substantially similar limitations. Regarding claim 20, the rejection of claim 15 is incorporated; further, claim 20 is rejected under the same rationale as set forth in the rejection of claim 7 due to the substantially similar limitations.

Response to Arguments

Applicant's arguments filed 12/10/2025 have been fully considered but are not persuasive. A breakdown of the §101 and §102/§103 arguments follows.

§101: Applicant appears to argue on page 12 that amended claim 1 overcomes §101, pointing specifically to "dynamic environment comprising an unknown domain of the system under analysis," "a sensor network that senses runtime states of the system under analysis," and "the streaming time-series sensor measurements captured as a continuous or streaming stream during operation." Applicant appears to argue that these limitations define a particular machine implementation.
Examiner respectfully disagrees: the sensor network that captures streaming information and senses runtime states does not meet the requirements of a particular machine.

Particular machine, MPEP 2106.05(b)(I): MPEP 2106.05(b)(I) considers the particularity or generality of the elements of the machine or apparatus, i.e., the degree to which the machine in the claim can be specifically identified (not any and all machines). The limitation is recited at a high level with no or minimal details or identifying features, as no specificity of the sensor network or its sensors was given. It is important to note that a general-purpose computer that applies a judicial exception, such as an abstract idea, by use of conventional computer functions does not qualify as a particular machine. Because no specific details or identifying features of the sensor network were given that would make it specifically identifiable, the limitation does not meet the MPEP 2106.05(b)(I) threshold.

Particular machine, MPEP 2106.05(b)(II): MPEP 2106.05(b)(II) considers whether integral use of a machine to achieve performance of a method integrates the recited judicial exception into a practical application or provides significantly more, in contrast to where the machine is merely an object on which the method operates, which does neither. Here the claims are not focused on using the sensor network to obtain data; instead, they appear focused on working on data already received. The limitation is not used as an integral part of achieving performance and is not recited as performing the abstract idea, so it does not meet the MPEP 2106.05(b)(II) threshold.

Particular machine, MPEP 2106.05(b)(III): MPEP 2106.05(b)(III) considers whether the machine's involvement is extra-solution activity or a field-of-use limitation, i.e., the extent to which (or how) the machine or apparatus imposes meaningful limits on the claim. Use of a machine that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data-gathering step or in a field-of-use limitation) would not integrate a judicial exception or provide significantly more. As cited, the limitations merely describe the source from which the input data is received, which does not meet the MPEP 2106.05(b)(III) threshold.

Applicant appears to argue on pages 13-14 that a person cannot mentally "receive and process continuous or streaming time-series sensor measurements captured as a continuous or streaming stream during operation from an operational system," "perform dual-path encoding to disentangle content and domain factors," or "reconstruct inputs and outputs and compute and then apply reconstruction-loss decisions." Applicant appears to assert that specifying the input as streaming time-series sensor measurements involves compressing high-volume and diverse runtime input data, generating a reconstruction loss, and analyzing the reconstruction loss to identify anomalous data. Applicant appears to assert that certain limitations cannot be performed mentally because the computational techniques involved, such as dimensionality reduction, adversarial training, and iterative optimization, are beyond the capabilities of the human mind. Examiner respectfully disagrees: performing encoding to disentangle content and domain factors, reconstructing inputs and outputs, and computing and then applying reconstruction-loss decisions are all mental or mathematical concepts.
The limitations are stated at a high level, and no highlighted limitation would preclude the BRI set out in the rejection. More specifically, reconstructing inputs and outputs and computing and applying a reconstruction loss, as recited in the claims, is "using a decoder stage of the neural network to generate reconstructed …data…to determine that…data…contains anomalous data…using the neural network to generate CIDI latent code from…data," which is a mental process ("generate reconstructed …data," "determine that…data…contains anomalous data," and "generating CIDI latent code") in which the neural network is used as a generic computer to perform abstract ideas, i.e., "apply it on a computer" (see MPEP 2106.05(f)). Further, receiving runtime input data is categorized as insignificant extra-solution activity of data gathering (see MPEP 2106.05(g)); specifying that the received runtime input data comprises streaming time-series sensor measurements captured as a continuous or streaming stream of the runtime input data during operation of the operational component merely specifies a particular technological environment in which the abstract idea is to take place, i.e., a field of use (see MPEP 2106.05(h)).

Applicant appears to argue on page 14 that the additional elements integrate the claims into a technological improvement, namely improving the technological process of anomaly detection in unknown domains through a specified encoder-decoder architecture operating on live sensor data. Applicant asserts that the sensor-acquisition and streaming constraints are not mere field-of-use or insignificant data-gathering limitations, but instead define the problem context and are integral to the claimed solution. Examiner respectfully disagrees: the claims as presented do not recite or highlight an improvement to neural networks or hardware processors, and instead only implicitly imply an improvement from using the invention. Examiner notes MPEP 2106.05(a), which provides the requirements for evaluating an improvement to the functioning of a computer or to any other technology or technical field. Applicant's example does not explain how the additional elements reflect this improvement, or how the improvement is effected by any claimed additional element. At best, Applicant's example describes an improvement provided by the claimed abstract idea of determining that data contains anomalous data. Further, claims that require a computer may still recite a mental process (see MPEP 2106.04(a)(2).III.C).

§102/§103: Applicant appears to argue on pages 15-16 that Hu's face-image datasets are not comparable to streaming input data generated by a sensor network (as Hu's disclosure and experiments are confined to pre-collected, offline still images as input), or to using a reconstruction loss to detect anomalies in streaming runtime data from an operational system, because the amended claims are directed to streaming time-series sensor measurements captured during operation of an operational component, i.e., a streaming sensor network monitoring an operational system during runtime as the input. Examiner agrees that Hu alone does not teach the amended language of streaming time-series sensor measurements captured during operation of an operational component, or a streaming sensor network monitoring an operational system during runtime, combined with a reconstruction loss. Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant appears to argue on page 16 that Hu does not disclose iterations of an anomalous data detection task. Examiner respectfully disagrees: because training is necessary to perform anomalous data detection, training, and therefore iterations of training, is considered part of the anomalous data detection task, and the detection and prediction of synthetic versus real images is an anomalous data detection task. Further, Figure 2 shows epochs (one full pass through the entire dataset), each made up of multiple iterations, where each iteration is the processing of a single batch of data and thus an iteration of the anomalous data detection task.
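The epoch/iteration distinction drawn above is easy to see in code. The loop below is purely illustrative: the toy dataset, batch size, and detect_anomalies placeholder are assumptions, not Hu's Algorithm 1.

```python
# Toy loop showing the distinction: each iteration processes one batch;
# an epoch is one full pass over the dataset. The per-batch detection
# call is a placeholder, not Hu's Algorithm 1.
dataset = list(range(120))     # stand-in for the training images
BATCH = 30

def detect_anomalies(batch):   # placeholder for the per-batch task
    return [x for x in batch if x % 50 == 0]

for epoch in range(2):                        # two full passes
    for t in range(0, len(dataset), BATCH):   # iteration t: one batch
        candidates = detect_anomalies(dataset[t:t + BATCH])
        print(epoch, t, candidates)
```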
Applicant appears to argue on page 16 that cameras, and data captured by cameras, are not comparable to a sensor network that senses runtime states of a system under analysis. Examiner respectfully disagrees: no highlighted limitation would preclude the broadest reasonable interpretation of a sensor network as including a camera, or of the captured video/images as runtime states. Further, no claim language would preclude, under the BRI, the area captured by a camera from being a system under analysis.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES JEFFREY JONES JR, whose telephone number is (703) 756-1414. The examiner can normally be reached Monday-Friday, 8:00-5:00 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.J.J./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Apr 22, 2022
Application Filed
Feb 20, 2025
Non-Final Rejection — §101, §103
May 27, 2025
Response Filed
Sep 18, 2025
Final Rejection — §101, §103
Nov 24, 2025
Response after Non-Final Action
Dec 10, 2025
Request for Continued Examination
Dec 19, 2025
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582959
DATA GENERATION DEVICE AND METHOD, AND LEARNING DEVICE AND METHOD
2y 5m to grant Granted Mar 24, 2026
Patent 12380333
METHOD OF CONSTRUCTING NETWORK MODEL FOR DEEP LEARNING, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Aug 05, 2025


Prosecution Projections

3-4
Expected OA Rounds
27%
Grant Probability
93%
With Interview (+65.9%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
