Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 2 is objected to because of the following informalities: In Claim 2, line 5, “outputs” was probably meant to be: output. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: All claims are directed towards either a method or an apparatus and thus satisfy Step 1 as falling into one of the statutory categories.

Step 2A, Prong One: Independent Claim 1 recites (the same analysis applies to similar independent Claim 5):

and a discriminator configured to use, as an input, output data, of a first encoder included in the first autoencoder, or a second encoder included in the second autoencoder, to output a probability that the output data is output data representing a feature for any one of the target domain and the source domain.

This limitation, under its broadest reasonable interpretation, and in view of paragraphs 31-32 of the specification as filed, is considered as falling under the “Mathematical Concepts” grouping of abstract ideas. Additionally, this limitation, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and therefore would fall under the “Mental Processes” grouping of abstract ideas.
That is, the human mind is capable of determining whether data represents a feature of a particular domain with some degree of certainty/probability based on observation and evaluation (the autoencoder and encoder representing tools to perform the abstract idea - see MPEP 2106.05(f)).

Step 2A, Prong Two: Claim 1 recites the additional elements of (the same analysis applies to similar independent Claim 5):

input (i) a normal data collection for a first system that is a target domain and (ii) a normal data collection for a second system that is a source domain;

This limitation is considered as adding insignificant extra-solution activity (inputting data) to the judicial exception - see MPEP 2106.05(g).

and train a model,

This limitation is considered as using a model to perform the abstract idea, which includes training the model - see MPEP 2106.05(f).

the model including a first autoencoder configured to input normal data for the target domain, based on the normal data collection for the first system, a second autoencoder configured to input normal data for the source domain, based on the normal data collection for the second system,

This limitation is considered as adding the words “apply it” (or an equivalent) with the judicial exception - see MPEP 2106.05(f).

The further additional elements of “circuitry” and/or a “computer” as recited in these independent claims are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are therefore directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are considered as using a model to perform the abstract idea, which includes training the model - see MPEP 2106.05(f), and as adding insignificant extra-solution activity (inputting data) to the judicial exception, which is considered as appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception - see MPEP 2106.05(d). The further additional elements of “circuitry” and/or a “computer” as recited in these independent claims amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are therefore not patent eligible.

Dependent Claim 2 is also considered as falling under the “Mathematical Concepts” grouping of abstract ideas in view of paragraphs 30-32 of the specification as filed. Dependent Claim 3 is considered as generally linking the use of the judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h). Dependent Claims 4 and 6 are considered as adding the words “apply it” (or an equivalent) with the judicial exception - see MPEP 2106.05(f) (the determination of anomaly detection being previously considered as pointed out above). Dependent Claim 7 is considered as adding the words “apply it” (or an equivalent) with the judicial exception - see MPEP 2106.05(f).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over Abrol, US 2021/0312674 A1, in view of Okanohara, US 2021/0011791 A1.
Regarding Claim 1, Abrol teaches:

A learning apparatus comprising: circuitry configured to: input (i) a normal data collection for a first system that is a target domain and (ii) a normal data collection for a second system that is a source domain; and train a model (Abstract: “training, by a system operatively coupled to a processor, a post-processing model to correct an image-based inference output of a source image processing model that results from application of the source image processing model to a target image from a target domain that differs from a source domain, wherein the source image processing model was trained on source images from the source domain”), the model including a first autoencoder configured to input normal data for the target domain, based on the normal data collection for the first system (paragraph 33: “the post-processing model can comprise a shape autoencoder (SAE) consisting of an encoder network and a decoder network”; And, paragraph 35: “the post-processing model can be trained using ground truth data for the source domain, ground truth data for the target domain, or a combination of both”), a second autoencoder configured to input normal data for the source domain, based on the normal data collection for the second system (paragraph 46: “The source domain model 108 can include an image processing model trained to perform a specific image processing task on images from a source domain”; And, paragraph 100: “the source domain model 108 can also employ an autoencoder including an encoder network 1402 and a decoder network”).
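For orientation only (this sketch is the Examiner's illustration, not part of either reference), the claimed two-autoencoder arrangement can be pictured as follows; the linear encoders/decoders, layer sizes, and data shapes are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearAutoencoder:
    """Minimal linear autoencoder: encode input to a latent vector, decode back."""
    def __init__(self, dim_in, dim_latent):
        self.W_enc = rng.standard_normal((dim_in, dim_latent)) * 0.1
        self.W_dec = rng.standard_normal((dim_latent, dim_in)) * 0.1

    def encode(self, x):
        return x @ self.W_enc

    def decode(self, z):
        return z @ self.W_dec

    def reconstruct(self, x):
        return self.decode(self.encode(x))

# One autoencoder per domain, as in Claim 1.
target_ae = LinearAutoencoder(dim_in=8, dim_latent=3)  # first autoencoder (target domain)
source_ae = LinearAutoencoder(dim_in=8, dim_latent=3)  # second autoencoder (source domain)

# Normal data collections for the first (target) and second (source) systems.
target_normal = rng.standard_normal((5, 8))
source_normal = rng.standard_normal((20, 8))

# The encoder outputs are what the claimed discriminator would receive as input.
z_target = target_ae.encode(target_normal)
z_source = source_ae.encode(source_normal)
print(z_target.shape, z_source.shape)
```

The two autoencoders are independent; only their encoder outputs are shared downstream with the discriminator recited in the claim.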
Although Abrol may have taught a discriminator (see for example paragraphs 36, 102), Okanohara more directly shows:

and a discriminator configured to use, as an input, output data, of a first encoder included in the first autoencoder, or a second encoder included in the second autoencoder, to output a probability that the output data is output data representing a feature for any one of the target domain and the source domain (paragraphs 103-104: “In the learning at regularization stage, discriminators are learned, and at that time, learning is also performed for encoders that infer the expression z0 of input data to be input to the discriminator. First, the training data x which is normal data is input to the encoder, and the expression z0 of the input data is inferred based on the latent variable model, and on the other hand, sampling is performed from the sample generator (Sampler) to produce a false expression z1. It is desirable to prepare multiple z0 and z1 for suitable learning. Based on z0 and z1, training data (z0, 0), (z1, 1) to be input to the discriminator are created”. That is, the output z's of the encoder are used as input to the discriminator. And further in paragraph 104: “The learned discriminator outputs the probability that the input is real data”; the real data will include features of any one of the target domain and/or the source domain with a degree of certainty represented by the probability).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teachings of Okanohara with that of Abrol for having a discriminator configured to use, as an input, output data, of a first encoder included in the first autoencoder, or a second encoder included in the second autoencoder, to output a probability that the output data is output data representing a feature for any one of the target domain and the source domain.
The ordinary artisan would have been motivated to modify Abrol in the manner set forth above for the purposes of having highly accurate abnormality detection by using the encoder outputs and the discriminator [Okanohara: paragraph 106].

Regarding Claim 2, Abrol further teaches:

The learning apparatus according to claim 1, wherein the circuitry is configured to learn parameters of the model such that, a difference between an input and an output of the first autoencoder and a difference between an input and an output of the second autoencoder are minimized (paragraph 104: “trains a shape autoencoder model to correct an image-based inference output of a source domain model that results from application of the source domain model to a target image from a target domain model that differs from a source domain, wherein the source domain model was trained on source images from the source domain”. The training of the models involves the learning/optimizing of their parameters so that differences between their inputs and outputs are minimized; see paragraph 65: “In accordance with the semi-supervised and the unsupervised learning methods, the SAE can be trained to generate the predicted inference output 312 by minimizing losses with the corresponding ground truth masks at the network output”; SAE being the autoencoder),

And Okanohara further teaches:

and the probability that the discriminator outputs is maximized (paragraph 104: “a regularization error is obtained in the process of distinguishing between normal data and false data in the discriminator, and not only the discriminator but also the parameters of the encoder are updated and learned using the regularization error, so that this improves the accuracy of inference in the encoder and improves the discrimination accuracy of the discriminator”; The improvement of the discrimination accuracy of the discriminator indicating that the discriminator output is maximized.
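As a hedged illustration of the Claim 2 objective (minimize each autoencoder's input/output difference while maximizing the discriminator's output probability), the loss terms might be combined as below; the logistic discriminator, the squared-error reconstruction terms, and the equal weighting are the Examiner's assumptions, not taken from either reference:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Hypothetical encoder outputs (latent codes) for the two domains.
z_target = rng.standard_normal((5, 3))
z_source = rng.standard_normal((20, 3))

# Hypothetical inputs vs. reconstructions for the two autoencoders.
x_t, xhat_t = rng.standard_normal((5, 8)), rng.standard_normal((5, 8))
x_s, xhat_s = rng.standard_normal((20, 8)), rng.standard_normal((20, 8))

# Discriminator: logistic regression over a latent code -> output probability.
w, b = rng.standard_normal(3) * 0.1, 0.0
p_target = sigmoid(z_target @ w + b)
p_source = sigmoid(z_source @ w + b)

# (i) Reconstruction differences to be minimized (first clause of Claim 2).
recon_loss = np.mean((x_t - xhat_t) ** 2) + np.mean((x_s - xhat_s) ** 2)

# (ii) Discriminator probability to be maximized; equivalently, its negative
#      log is minimized alongside the reconstruction terms.
disc_term = -np.mean(np.log(p_target + 1e-12)) - np.mean(np.log(p_source + 1e-12))

total_loss = recon_loss + disc_term  # parameters would be updated to reduce this
print(round(float(total_loss), 4))
```

Under this reading, one gradient step on `total_loss` simultaneously shrinks both autoencoders' input/output differences and pushes the discriminator's output probability upward, consistent with the claim language.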
Examiner’s note: see also Lee, US 2021/0224606 A1, for example paragraph 33).

Regarding Claim 3, Abrol further teaches:

The learning apparatus according to claim 1, wherein the number of pieces of data included in the normal data collection for the target domain is smaller than the number of pieces of data included in the normal data collection for the source domain (Abstract: “training, by a system operatively coupled to a processor, a post-processing model to correct an image-based inference output of a source image processing model that results from application of the source image processing model to a target image from a target domain that differs from a source domain, wherein the source image processing model was trained on source images from the source domain”; And, paragraph 31: “The disclosed subject matter is directed to systems, computer-implemented methods, apparatus and/or computer program products that facilitate domain adaptation of image processing models with no or very little labelled data/ground truth for the target domain using unsupervised and semi-supervised machine learning methods”. That is, the target domain data collection is smaller than the source domain data collection).

Regarding Claim 4, Abrol further teaches:

An anomaly detection apparatus comprising: circuitry configured to determine whether an anomaly has occurred in a system, using (i) the first autoencoder included in the model that is trained by the learning apparatus according to claim 1 and (ii) data for the system that is a target on which anomaly detection is performed (Abstract: “training, by a system operatively coupled to a processor, a post-processing model to correct an image-based inference output of a source image processing model that results from application of the source image processing model to a target image from a target domain that differs from a source domain, wherein the source image processing model was trained on source images from the source domain”.
And, paragraph 31: “the image processing models can include artificial intelligence/machine learning (AI/ML) based medical image processing models, such as organ segmentation models, anomaly detection models, image reconstruction models, and the like. The disclosed domain adaptation techniques can also be extended to AI/ML image analysis/processing models configured to perform similar inferencing tasks on non-medical domains”. See also Okanohara, for example paragraph 111: “abnormality detection target is input into the encoder of the learned model, the expression of the data of abnormality detection target is inferred, and the restored data is generated from the expression in the decoder. The obtained restored data is compared with the input data for abnormality detection target, and abnormality is detected from the deviation between them”; abnormality being representative of the anomaly).

Claims 5 and 6 are similar to Claims 1 and 4, respectively, and are rejected under the same rationale as stated above for those claims.

Regarding Claim 7, Okanohara teaches:

A non-transitory computer readable medium storing a program that causes a computer to execute the learning method according to claim 5 (see claim 39). (Emphasis added).

Examiner’s Note: The Examiner cites particular pages, sections, columns, line numbers, and/or paragraphs in the references as applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well.
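For orientation only, the deviation-based detection quoted above from Okanohara's paragraph 111 (restore the input through the learned model and flag a large input/restoration deviation) reduces to a thresholding step such as the following; the damping "restoration" function and the threshold value are placeholders chosen for illustration, not values from the reference:

```python
import numpy as np

def detect_anomaly(x, reconstruct, threshold):
    """Flag an anomaly when the input deviates too far from its restoration."""
    deviation = float(np.mean((x - reconstruct(x)) ** 2))
    return deviation > threshold, deviation

# Placeholder "learned" restoration: damps the signal, so inputs unlike the
# normal training data restore poorly and yield a large deviation.
reconstruct = lambda x: 0.9 * x

normal = np.ones(8) * 0.1
anomalous = np.ones(8) * 10.0

flag_n, dev_n = detect_anomaly(normal, reconstruct, threshold=0.5)
flag_a, dev_a = detect_anomaly(anomalous, reconstruct, threshold=0.5)
print(flag_n, flag_a)  # -> False True
```

The choice of threshold governs the trade-off between missed anomalies and false alarms; the claim itself recites only the determination step, not any particular threshold.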
It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner, and the additional related prior art made of record that is considered pertinent to applicant's disclosure to further show the general state of the art. The Examiner's interpretations in parentheses are provided with the cited references to assist the applicant to better understand how the examiner interprets the prior art to read on the claims. Such comments are entirely consistent with the intent and spirit of compact prosecution.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the PTO-892 for the relevant prior art, where, for example, Fan, US 2021/0049452 A1, teaches anomaly detection and usage of a discriminator.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVE MISIR whose telephone number is (571) 272-5243. The examiner can normally be reached M-R 8-5 pm, F some hours.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Al Kawsar, can be reached at (571) 270-3169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVE MISIR/
Primary Examiner, Art Unit 2127