Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims are directed to either a system or a method, each of which is a statutory category of invention. (Step 1: YES)

The examiner has identified system Claim 11 as the claim that represents the claimed invention for analysis; it is similar to Claims 1-2. Claim 11 recites the following limitations (additional elements are emphasized in bold and are considered to be parsed from the remaining abstract idea): “A system for implementing a machine-learning (ML)-based function, the system comprising: a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code, whereupon execution of said modules of instruction code, the at least one processor is configured to: provide a NN model comprising a plurality of NN parameters; train the NN model based on a training dataset over a plurality of training epochs, to implement a predefined ML function, wherein each training epoch comprises: adjusting a value of at least one NN parameter based on gradient descent calculation; calculating a profile vector, representing evolution of the at least one NN parameter through the plurality of training epochs; calculating an approximated value of the at least one NN parameter, based on the profile vector; and replacing the at least one NN parameter value with the approximated value, to obtain an approximated version of the NN model.
” This is a process that, under its broadest reasonable interpretation, covers performance of the limitation(s) as a Mental Process (a concept performed in the human mind) and a Mathematical Concept (mathematical relationships, formulas or equations, or calculations): dynamic adjustment of neural network parameters by using mathematical approximation/forecasting (e.g., via partial differential equations) (see, e.g., Figs. 5A-5B and ¶22-28). If a claim limitation, under its broadest reasonable interpretation (BRI), covers performance of the limitation as a certain method of a fundamental economic practice, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Similarly, if a claim limitation, under its BRI, covers performance of the limitation in the human mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. (Claims can recite a mental process even if they are claimed as being performed on a computer. Gottschalk v. Benson, 409 U.S. 63; “Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person’s mind.” Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015).) Accordingly, the claim recites an abstract idea. (Step 2A-Prong 1: YES. The claims are abstract.) This judicial exception is not integrated into a practical application.
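The characterization above (mathematical calculation that, at this level of generality, could be carried out with pen and paper) can be illustrated with a minimal sketch of the recited steps. This is a hypothetical illustration only, not Applicant's disclosed implementation: the one-parameter model, quadratic loss, learning rate, and geometric-series extrapolation are all assumptions made for demonstration.

```python
def train_and_approximate(epochs=50, lr=0.1):
    """Sketch of the recited steps: gradient-descent adjustment, a
    per-parameter 'profile vector' across epochs, and replacement of the
    parameter with an extrapolated (approximated) value."""
    w = 0.0                      # a single NN parameter (illustrative)
    profile = []                 # "profile vector": parameter's evolution
    for _ in range(epochs):
        grad = 2.0 * (w - 3.0)   # gradient of the assumed loss (w - 3)^2
        w -= lr * grad           # gradient-descent adjustment of the value
        profile.append(w)
    # Approximate the converged value by extrapolating the tail of the
    # profile vector (here: a simple geometric-series extrapolation).
    d1 = profile[-1] - profile[-2]
    d0 = profile[-2] - profile[-3]
    r = d1 / d0 if d0 else 0.0
    if abs(r) < 1.0:
        w_approx = profile[-1] + d1 * r / (1.0 - r)
    else:
        w_approx = profile[-1]
    return profile, w_approx
```

Under these assumptions the successive updates converge geometrically toward the loss minimum, so the extrapolated value lands essentially on the converged parameter, which is the sense in which the approximation step is purely mathematical.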
Limitations that are not indicative of integration into a practical application include: (1) adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (MPEP 2106.05(f)); (2) adding insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)); and (3) generally linking the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)). Claims 1-2 do not recite any hardware components and have been read as including such generic components. The computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception. Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, claim 11 is directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application.) The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using computer hardware amounts to no more than mere instructions to implement an abstract idea by adding the words “apply it” (or an equivalent) with the judicial exception.
Mere instructions to implement an abstract idea on or with the use of generic computer components cannot provide an inventive concept, rendering the claim patent ineligible. Thus, claim 11 is not patent eligible. (Step 2B: NO. The claims do not provide significantly more.) The dependent claims further define the abstract idea that is present in their respective independent claims and hence are abstract for at least the reasons presented above. The dependent claims do not include any additional elements (the dependent claims discuss different approximation methods or updates to the neural network’s parameters which, broadly read, are all performed by generic computer components that further implement the abstract idea) that integrate the abstract idea into a practical application or that are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination: the dependent claims recite further steps that can be performed in the human mind. Therefore, the dependent claims are directed to an abstract idea. Thus, the aforementioned claims are not patent-eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3 and 11-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tano et al., “Accelerating training in artificial neural networks with dynamic mode decomposition” (“Tano”) (available June 18, 2020). Tano teaches:

Re 1: A method of implementing a machine-learning (ML)-based function (abstract; Fig.
1; Algorithm 1), the method comprising: providing a Neural Network (NN) model comprising a plurality of NN parameters (Fig. 1; §2-3: discussing flattening weights and having a snapshot matrix; Algorithm 1); training the NN model over a plurality of training epochs, to implement a predefined ML function, based on a training dataset (Algorithm 1; §4); for one or more NN parameters of the plurality of NN parameters: calculating a profile vector, representing evolution of the NN parameter through the plurality of training epochs (§2: discussing “tracking the evolution of weights”; §3: see snapshot matrix across optimizer steps); and calculating an approximated value of the at least one NN parameter, based on the profile vector (§2: “evaluate an approximate converged state”; §3: DMD evolution/Eq. 5); and replacing at least one NN parameter value in the trained NN model with a respective calculated approximated value, to obtain an approximated version of the trained NN model (Algorithm 1: “Assign updated weights to layer l in the neural network”).

Re 2: A method of training a NN model (abstract; Fig.
1; Algorithm 1), the method comprising: providing a NN model comprising a plurality of NN parameters (Fig. 1; §2-3: discussing flattening weights and having a snapshot matrix; Algorithm 1); training the NN model based on a training dataset over a plurality of training epochs (Algorithm 1; §4), to implement a predefined ML function (Algorithm 1; §4), wherein each training epoch comprises (Algorithm 1; §4): adjusting a value of at least one NN parameter based on gradient descent calculation (Algorithm 1: “Do backpropagation step”; §2: discussion of backprop/optimizer steps); calculating a profile vector, representing evolution of the at least one NN parameter through the plurality of training epochs (§2: tracking evolution; §3: snapshot matrix across steps); calculating an approximated value of the at least one NN parameter, based on the profile vector (§2: approximate converged state; §3: DMD evolution); and replacing the at least one NN parameter value with the approximated value, to obtain an approximated version of the NN model (Algorithm 1: “Assign updated weights …”).

Re 3: receiving an input data sample (§4: inputs are uncertain parameters/samples); and inferring the approximated version of the NN model on the input data sample, to implement the ML function on the input data sample (§4: the trained network is used to rapidly evaluate/predict the regression output with the trained/updated weights).

Re 11: A system for implementing a machine-learning (ML)-based function (abstract; Fig.
1; Algorithm 1), the system comprising (abstract; Algorithm 1: a non-transitory memory and processor are inherent in the teachings of Tano, which involve training neural networks over thousands of epochs): a non-transitory memory device, wherein modules of instruction code are stored, and at least one processor associated with the memory device, and configured to execute the modules of instruction code, whereupon execution of said modules of instruction code, the at least one processor is configured to: provide a NN model comprising a plurality of NN parameters (Fig. 1; §3-4); train the NN model based on a training dataset over a plurality of training epochs (Algorithm 1; §3-4; Fig. 1), to implement a predefined ML function, wherein each training epoch comprises: adjusting a value of at least one NN parameter based on gradient descent calculation (Algorithm 1); calculating a profile vector, representing evolution of the at least one NN parameter through the plurality of training epochs (§2-3: tracking evolution and using a snapshot matrix); calculating an approximated value of the at least one NN parameter, based on the profile vector (§2: approximate converged state discussed; §3: DMD evolution); and replacing the at least one NN parameter value with the approximated value, to obtain an approximated version of the NN model (Algorithm 1: “Assign updated weights…”).

Re 12: wherein the at least one processor is further configured to: receive an input data sample; and infer the approximated version of the NN model on the input data sample, to implement the ML function on the input data sample (§4: regression task: apply the trained/updated model to inputs to produce outputs).

Allowable Subject Matter

Claims 4-10 and 13-17 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 101 set forth in this Office action. The following is a statement of reasons for the indication of allowable subject matter.
In combination with the other limitations, nothing in the prior art of record teaches, suggests, or discloses:

Re 4-10: in claim 4, “wherein said training comprises a preliminary stage comprising: training the NN model based on a training dataset over a first bulk of training epochs; grouping the plurality of NN parameters as members of a plurality of modes, based on their respective profile vectors; and calculating the approximated values of NN parameters based on said grouping.”

Re 13-17: in claim 13, “wherein said training comprises a preliminary stage, where the at least one processor is further configured to: train the NN model based on a training dataset over a first bulk of training epochs; group the plurality of NN parameters as members of a plurality of modes, based on their respective profile vectors; and calculate the approximated values of member NN parameters based on said grouping.”

Conclusion

Relevant prior art considered: US 20200388360, teaching an artificial neural network trained to predict successful patient care programs based on historical medical and social outcomes for a plurality of patients, wherein the historical medical and social outcomes are represented in respective vector arrays. US 20220147818: a computer-implemented method of training an auxiliary machine learning model to predict a set of new parameters of a primary machine learning model, wherein the primary model is configured to transform from an observed subset of a set of real-world features to a predicted version of the set of real-world features. US 12430560: a computer-implemented method for distributed synchronous training of a neural network model, including performing, by a worker machine of a plurality of worker machines, a forward computation of a training data set using a plurality of N layers of the neural network model.
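For context on the dynamic mode decomposition technique relied upon in the Tano rejection above: DMD forecasts each weight's trajectory from a snapshot matrix of past optimizer steps. The sketch below is a simplified illustration only; the plain least-squares linear step operator is an assumed stand-in and is not asserted to be Tano's exact Algorithm 1.

```python
import numpy as np

def dmd_forecast(snapshots, future_steps):
    """Forecast weight evolution from a snapshot matrix.

    snapshots: array of shape (n_weights, n_steps), each column holding the
    flattened weight vector at one optimizer step (the "snapshot matrix"
    discussed in Tano §3). Fits a best-fit linear step operator A with
    X' ≈ A X, then applies it future_steps more times. Illustrative only.
    """
    X, Xp = snapshots[:, :-1], snapshots[:, 1:]
    A = Xp @ np.linalg.pinv(X)   # least-squares one-step linear operator
    w = snapshots[:, -1]
    for _ in range(future_steps):
        w = A @ w                # advance the weights forward in "time"
    return w
```

For example, if two weights decay by half each step, the fitted operator halves any vector in the span of the observed snapshots, so the forecast continues the geometric decay.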
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GERALD J SUFLETA II, whose telephone number is (571) 272-4279. The examiner can normally be reached M-F 9AM-6PM EDT/EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ABDULMAJEED AZIZ, can be reached at (571) 270-5046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GERALD J SUFLETA II/
Primary Examiner, Art Unit 2875