DETAILED ACTION

This is a non-final, first Office action on the merits. Claims 1-15 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Specifically, claims 1-15 are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea.

With respect to Step 2A Prong One of the framework, claims 1 and 14-15 recite an abstract idea. Claims 1 and 14-15 include “providing a quality function, which measures for a reconstructed training example to what extent it belongs to an expected domain or distribution of the training examples; providing a variable B of a batch of training examples, with which the neural network has been trained; dividing a gradient dL/dMw of the cost function ascertained during the training according to parameters which characterize a behavior of the neural network, into a partition made up of B components; reconstructing, from each component of the gradient dL/dMw of the cost function, a training example, using a functional dependency of outputs of neurons in an input layer which receives the training examples from the parameters of the neurons and from the training examples; assessing the reconstructions using the quality function; and optimizing the partition into the components with an aim of improving their assessment via the quality function upon renewed division of the gradient dL/dMw of the cost function and reconstruction of new training examples”. The limitations above recite an abstract idea under Step 2A Prong One.
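For context, the quoted limitations describe, in substance, a gradient-inversion procedure: split an aggregate training gradient into B per-example components, reconstruct a candidate input from each component via the input layer's functional dependency, score the reconstructions with a quality function, and iterate. The following is a minimal numpy sketch of that idea under toy assumptions only (a single linear input layer, a quadratic cost, a rank-1-based quality heuristic, and naive random search); none of these choices are taken from the claims or the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy dimensions (assumptions, not from the claims):
B, d_in, d_out = 3, 5, 2
X = rng.normal(size=(B, d_in))            # training examples
T = rng.normal(size=(B, d_out))           # targets
W = rng.normal(size=(d_out, d_in))        # input-layer parameters

# For a quadratic cost L = 0.5*||W x - t||^2, each per-example gradient
# dL/dW|_i = (W x_i - t_i) x_i^T is rank-1; training exposes only the sum G.
G = sum(np.outer(W @ X[i] - T[i], X[i]) for i in range(B))

def rank1_quality(C):
    # Quality heuristic: fraction of a component's energy in its top singular
    # value; a true per-example gradient scores exactly 1.0.
    s = np.linalg.svd(C, compute_uv=False)
    return s[0] / s.sum()

def reconstruct(C):
    # Functional dependency of the input layer: the top right-singular vector
    # of a near-rank-1 component recovers the input up to scale and sign.
    _, _, vt = np.linalg.svd(C)
    return vt[0]

# Start from the trivial partition G = sum_j C_j with C_j = G/B, then optimize
# the partition by zero-sum random perturbations that improve mean quality.
parts = np.stack([G / B] * B)
best = np.mean([rank1_quality(C) for C in parts])
for _ in range(2000):
    P = rng.normal(scale=0.05, size=parts.shape)
    P -= P.mean(axis=0)                   # keeps sum_j C_j == G invariant
    cand = parts + P
    score = np.mean([rank1_quality(C) for C in cand])
    if score > best:
        parts, best = cand, score

recons = [reconstruct(C) for C in parts]  # one candidate input per component
print(f"mean rank-1 quality after search: {best:.3f}")
```

The zero-mean perturbation step is what makes each candidate division a valid partition of the same gradient, matching the "renewed division" language of the limitation.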
More particularly, the elements above recite mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion) and mathematical relationships, because the elements describe a process for reconstructing training examples. As a result, claims 1 and 14-15 recite an abstract idea under Step 2A Prong One. Claims 2-13 further describe the process for reconstructing training examples. As a result, claims 2-13 recite an abstract idea under Step 2A Prong One for the same reasons as stated above with respect to claims 1 and 14-15.

With respect to Step 2A Prong Two of the framework, claims 1 and 14-15 do not include additional elements that integrate the abstract idea into a practical application. Claims 1 and 14-15 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 1 and 14-15 include a neural network, multiple computers, and a non-transitory machine-readable data medium whose instructions are executed by one or multiple computers. When considered in view of the claim as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea. As a result, claims 1 and 14-15 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two.

Claims 2-3 and 8 do not include any additional elements beyond those recited with respect to claims 1 and 14-15. As a result, claims 2-3 and 8 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two for the same reasons as stated above with respect to claims 1 and 14-15. Claims 4-7 and 9-13 include additional elements that do not recite an abstract idea under Step 2A Prong One.
The additional elements of claims 4-7 and 9-13 include a neural network and a Generative Adversarial Network (GAN). When considered in view of the claims as a whole, the additional elements do not integrate the abstract idea into a practical application because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. As a result, claims 4-7 and 9-13 do not include additional elements that integrate the abstract idea into a practical application under Step 2A Prong Two.

Further, claim 10 would be considered eligible if it were amended to further recite that, in response to the activation signal, the activated system performs specific executable operations. In particular, the claim should clearly state that the activation signal triggers the execution of defined processes by the activated system, rather than merely indicating activation. Including such language would more concretely tie the activation signal to a functional outcome, thereby supporting eligibility.

With respect to Step 2B of the framework, claims 1 and 14-15 do not include additional elements amounting to significantly more than the abstract idea. As noted above, claims 1 and 14-15 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 1 and 14-15 include a neural network, multiple computers, and a non-transitory machine-readable data medium whose instructions are executed by one or multiple computers. The additional elements do not amount to significantly more than the abstract idea because the additional computing elements are generic computing elements that are merely used as a tool to perform the recited abstract idea. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually.
As a result, independent claims 1 and 14-15 do not include additional elements that amount to significantly more than the abstract idea under Step 2B. Claims 2-3 and 8 do not include any additional elements beyond those recited with respect to claims 1 and 14-15. As a result, claims 2-3 and 8 do not include additional elements that amount to significantly more than the abstract idea under Step 2B for the same reasons as stated above with respect to claims 1 and 14-15.

Claims 4-7 and 9-13 include additional elements that do not recite an abstract idea under Step 2A Prong One. The additional elements of claims 4-7 and 9-13 include a neural network and a Generative Adversarial Network (GAN). The additional elements do not amount to significantly more than the abstract idea because the additional computing elements do no more than generally link the use of the recited abstract idea to a particular technological environment. Further, looking at the additional elements as an ordered combination adds nothing that is not already present when considering the additional elements individually. As a result, claims 4-7 and 9-13 do not include additional elements that amount to significantly more than the abstract idea under Step 2B.

Further, claim 10 would be considered eligible if it were amended to further recite that, in response to the activation signal, the activated system performs specific executable operations. In particular, the claim should clearly state that the activation signal triggers the execution of defined processes by the activated system, rather than merely indicating activation. Including such language would more concretely tie the activation signal to a functional outcome, thereby supporting eligibility.

Therefore, the claims are directed to an abstract idea without additional elements amounting to significantly more than the abstract idea. Accordingly, claims 1-15 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Allowable Subject Matter

Claims 1-15 appear to be allowable if rewritten to overcome the 35 USC § 101 rejection. The prior art references most closely resembling the Applicant’s claimed invention are Besenbruch et al. (US Pub No. 2022/0279183) (hereinafter Besenbruch et al.), in view of Baker et al. (US Pub No. 2020/0134451) (hereinafter Baker et al.), and further in view of Qin et al., “Fully Convolutional-Based Dense Network for Lung Nodule Image Retrieval Algorithm,” International Journal of …, 2019, ijpe-online.com (hereinafter Cui et al.).

Besenbruch discloses providing a quality function, which measures for a reconstructed training example to what extent it belongs to an expected domain or distribution of the training examples; providing a variable B of a batch of training examples, with which the neural network has been trained; reconstructing, from each component of the gradient dL/dMw of the cost function, a training example, using a functional dependency of outputs of neurons in an input layer of the neural network which receives the training examples from the parameters of the neurons and from the training examples; and assessing the reconstructions using the quality function (see Besenbruch, para [0394], page 37, page 75, page 154, and pages 114-115). However, the system in Besenbruch does not explicitly disclose dividing a gradient dL/dMw of the cost function ascertained during the training according to parameters which characterize a behavior of the neural network, into a partition made up of B components; and optimizing the partition into the components with an aim of improving their assessment via the quality function upon renewed division of the gradient dL/dMw of the cost function and reconstruction of new training examples. Moreover, neither Besenbruch et al., Baker et al., nor Cui et al.
disclose optimizing the partition into the components with an aim of improving their assessment via the quality function upon renewed division of the gradient dL/dMw of the cost function and reconstruction of new training examples. Baker discloses dividing a gradient dL/dMw of the cost function ascertained during the training according to parameters which characterize a behavior of the neural network, into a partition made up of B components (see Baker, para [0034]). Cui discloses a gradient dL/dMw of the cost function (see Cui, pages 330-333, wherein, in training the network, the quadratic cost function is used as the loss function).

Moreover, since the specific combination of claim elements “optimizing the partition into the components with an aim of improving their assessment via the quality function upon renewed division of the gradient dL/dMw of the cost function and reconstruction of new training examples” recited in claims 1 and 14-15 cannot be found in the cited prior art and can only be found as recited in Applicant’s Specification, any combination of the cited references and/or additional reference(s) to teach all the claim elements, including the aforementioned features not taught by the cited prior art, would be the result of impermissible hindsight reconstruction. Accordingly, a combination of Besenbruch et al., Baker et al., Cui et al., and/or any other additional reference(s) would be improper to teach the claimed invention. While the teachings of Besenbruch et al., Baker et al., and Cui et al. separately address different parts of the claimed invention, these teachings would not be combinable by one of ordinary skill in the art at the time of the invention with a reasonable expectation of success to provide a predictable combination that would render the claimed invention obvious. Thus, the novelty of the claimed invention lies in the combination of limitations rather than in any single limitation.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lidar et al. (US Pub No. 2019/0095799) discloses a discrete optimization problem associated with an objective function and, for each additional proposed solution of the sequence, derives an estimate of a quality distribution that is based on the sequence including the additional proposed solution; the quality distribution assigns a probability to each of the proposed solutions according to the quality of the proposed solution. Roth et al. (US Pub No. 2021/0374502) discloses selecting a neural network architecture from a plurality of neural networks in a federated learning (FL) setting. Yin et al. (US Pub No. 2022/0284232) discloses techniques to identify one or more images used to train one or more neural networks. Bercich et al. (US Pub No. 2021/0089899) discloses processing a first data with the first neural network, processing a second data with the second neural network, updating a weight in a node of the second neural network by a delta amount as a function of the processing of the second data with the second neural network, and updating a weight in a node of the first neural network as a function of the delta amount.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAFIZ A KASSIM, whose telephone number is (571) 272-8534. The examiner can normally be reached 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at 571-272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAFIZ A KASSIM/
Primary Examiner, Art Unit 3623
03/02/2026