Prosecution Insights
Last updated: April 19, 2026
Application No. 18/052,092

System and Method for Training of Neural Network Model for Control of High Dimensional Physical Systems

Final Rejection: §101, §103, §112
Filed: Nov 02, 2022
Examiner: BALAKRISHNAN, VIJAY MURALI
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Mitsubishi Electric Research Laboratories Inc.
OA Round: 2 (Final)
Grant Probability: 43% (Moderate)
Predicted OA Rounds: 3-4
Time to Grant: 3y 12m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 43% (6 granted / 14 resolved; -12.1% vs TC avg)
Interview Lift: +85.7% (allowance rate of resolved cases with an interview vs. without)
Avg Prosecution: 3y 12m (26 currently pending)
Total Applications: 40 (across all art units)

Statute-Specific Performance

§101: 26.4% (-13.6% vs TC avg)
§103: 31.5% (-8.5% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 24.3% (-15.7% vs TC avg)
Tech Center averages are estimates; based on career data from 14 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

This final action is in response to the amendment and remarks filed on 11/25/2025 for application 18/052,092. Claims 1, 8, 11, 18, and 20 have been amended. Claims 3 and 13 are cancelled. Claims 1-2, 4-12, and 14-20 remain pending in the application. Claims 1, 11, and 20 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) filed 10/15/2025 has been fully considered by the examiner.

Response to Amendment

The amendment filed 11/25/2025 has been entered. Applicant’s amendment to the specification with respect to resolving specification objections has been considered, and overcomes the objections set forth in the office action mailed 08/27/2025. Consequently, the objections have been withdrawn. Applicant’s amendment to the claims with respect to resolving claim objections has been considered, and overcomes the objections set forth in the office action mailed 08/27/2025. Consequently, the objections have been withdrawn.

Applicant’s amendment to the claims with respect to resolving indefiniteness rejections under 35 U.S.C. 112(b) has been considered, but does not fully overcome the grounds of rejection set forth in the office action mailed 08/27/2025. Issues previously raised in the rejection of claims 1, 11, and 20 remain unresolved. Applicant is directed towards the grounds of rejection under 35 U.S.C. 112(b) with respect to amended claims 1, 11, and 20 set forth below. All other previously raised indefiniteness rejections have been withdrawn.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-2, 4-12, and 14-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, it recites the limitations “training the neural network model having an autoencoder architecture, wherein the autoencoder architecture includes…a decoder configured to decode the linearly transformed encoded digital representation to minimize a loss function, wherein the loss function includes…”. It is unclear if the modifying phrase “to minimize a loss function” is recited as being an objective of “training the neural network model”, as would be ordinarily understood by one of ordinary skill in the art and appears to be consistent with the specification [¶ 0129], or instead is recited as being a result of the preceding phrase “decode the linearly transformed encoded digital representation”. However, it is unclear how the recited decoding (as an individual process, rather than the entire training process) would result in the minimization of a loss function, and the specification does not provide further clarification or explanation. Ultimately, the apparent inconsistencies between the specification and claims render their scope uncertain, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim 1 further recites “a prediction error between outputs of the neural network model decoding measurements of the operation at an instant of time and measurements of the operation collected at a subsequent instance of time”.
It is unclear how the recited “decoding measurements of the operation at an instant of time” is interrelated with the previously recited claim terms; for example, the “decoding” step could possibly be referring to either “the neural network model”, or “outputs”, or “a prediction error”. The claim also previously recites “a decoder configured to decode the linearly transformed encoded digital representation…” – it is therefore unclear if the recited “decoding” step is reciting a separate decoding procedure, is further expanding on the same process of “a decoder configured to decode…”, or is stating the recited “outputs of the neural network model” as being a result of the previous decoding. Consequently, one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

For purposes of examination and as best understood in light of the instant specification, the limitation “training the neural network model having an autoencoder architecture, wherein the autoencoder architecture includes…a decoder configured to decode the linearly transformed encoded digital representation to minimize a loss function, wherein the loss function includes” is interpreted as: “training the neural network model having an autoencoder architecture to minimize a loss function, wherein the autoencoder architecture includes… a decoder configured to decode the linearly transformed encoded digital representation, wherein the loss function includes…”.

For purposes of examination and as best understood in light of the instant specification, the limitation “a prediction error between outputs of the neural network model decoding measurements of the operation at an instant of time and measurements of the operation collected at a subsequent instance of time” is interpreted as “a prediction error between: outputs of the neural network model, wherein the outputs of the neural network model are decoded measurements of the operation at an instant of time, and measurements of the operation collected at a subsequent instance of time”.

Regarding claims 11 and 20, they have substantially similar deficiencies to those found in claim 1 above. Consequently, they are rejected for the same reasons and are interpreted as detailed above. Regarding claims 2, 4-10, 12, and 14-19, they inherit the deficiencies of their parent claims. Consequently, they are also rejected under 35 U.S.C. 112(b) as being indefinite for depending on an indefinite parent claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-12, and 14-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Independent Claims (Claim 1, Claim 11, Claim 20):

Step 1: Claim 1 is drawn to a method, claim 11 is drawn to a system/apparatus, and claim 20 is drawn to a product. Therefore, each of these claims falls under one of the four categories of statutory subject matter (process/method, machine/apparatus, manufacture/product, or composition of matter).

Step 2A Prong 1: Claims 1, 11, and 20 each recite a judicially recognized exception of an abstract idea. Claim 1 recites, inter alia:

an operation of a system having non-linear dynamics, wherein the non-linear dynamics are represented by partial differential equations (PDEs) – The elements recited in the preamble merely place the claim limitations within the context of analyzing measurements (non-linear dynamics) of a physical system (operation of a system) using mathematical concepts (such as PDEs).
A PDE is a type of mathematical equation, and thereby inherently represents mathematical relationships between variables (e.g., rates of change of a function with respect to its variables).

encode the digital representation into a latent space – Wherein the digital representation encompasses time series data, this limitation recites a procedure of using mathematical transformations to determine a projection of data into a lower-dimensional latent space, and thereby recites mathematical calculation. It can also be interpreted as reciting a mathematical relationship between variables (time series data and its lower-dimensional representation in the latent space).

propagate the encoded digital representation into the latent space with linear transformation determined by values of parameters of the linear predictor – This limitation recites a procedure of using mathematical methods (linear transformation[s]) to determine variables (the linearly transformed encoded digital representation), and thereby recites mathematical calculation.

decode the linearly transformed encoded digital representation – Wherein the previous propagate step recites a process of calculating a linearly transformed encoded digital representation, this limitation further recites using mathematical transformations to reproduce higher-dimensional digital representation data from the calculated linearly transformed encoded digital representation, and thereby further recites mathematical calculation.
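The encode / propagate / decode sequence walked through above, together with the loss limitation that follows, can be sketched numerically. In this toy, the dimensions, the linear encoder/decoder stand-ins, and the residual term are all hypothetical illustrations, not the applicant's actual network; the point is structural: the loss is a function of the entire composed pipeline, consistent with the examiner's §112 interpretation that minimization is an objective of training rather than of the decoding step alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for the claimed components (toy dimensions):
E = rng.standard_normal((2, 4))        # encoder: measurement -> latent space
K = np.diag([0.9, 0.5])                # linear predictor in the latent space
D = np.linalg.pinv(E)                  # decoder: latent space -> measurement

def loss(x_t, x_next, weight=0.1):
    """Prediction error between the decoded one-step prediction for time t
    and the measurement at time t+1, plus a toy residual term that depends
    on the eigenvalues of the linear predictor (a placeholder, not the
    claimed PDE residual)."""
    x_pred = D @ (K @ (E @ x_t))       # encode, propagate linearly, decode
    prediction_error = np.mean((x_pred - x_next) ** 2)
    eigvals = np.linalg.eigvals(K)     # eigenvalues set by predictor params
    residual = np.sum(np.abs(eigvals) ** 2)
    return prediction_error + weight * residual

x_t, x_next = rng.standard_normal(4), rng.standard_normal(4)
assert loss(x_t, x_next) > 0.0         # minimized only by adjusting E, K, D
```

Because `loss` depends jointly on E, K, and D, no single component (in particular, not the decoding step alone) can minimize it in isolation.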
minimize a loss function, wherein the loss function includes a prediction error between outputs of the neural network model decoding measurements of the operation at an instant of time and measurements of the operation collected at a subsequent instance of time, and a residual factor of the PDEs having eigenvalues dependent on the parameters of the linear predictor – This limitation recites a procedure of using mathematical methods (minimiz[ing] a loss function that comprises prediction error and residual factor parameters) to perform optimization, thereby reciting mathematical calculation.

performing eigen-decomposition to a Lie operator, wherein the residual factor of the PDE is based on the Lie operator – This limitation amounts to a procedure of using mathematical methods (performing eigen-decomposition) to determine variables (decompos[ed] factors of the Lie operator), and therefore recites mathematical calculation. It can also be interpreted as reciting a mathematical relationship between variables (the residual factor of the PDE and the Lie operator).

Claims 11 and 20 recite substantially similar abstract idea limitations to those recited in claim 1, and therefore recite the same judicial exception.

Step 2A Prong 2: The following additional elements recited in claims 1, 11, and 20 do not integrate the recited judicial exceptions into a practical application. Claim 1 additionally recites:

A computer-implemented method of training a neural network model [for controlling], comprising: – The additional elements recited in the preamble amount to no more than mere instructions to implement an abstract idea on a computer or computer components (computer-implemented method), and merely invoke neural architecture as a tool to perform an existing abstract idea of analyzing measurements of a physical system using mathematical concepts.
[model for] controlling an operation of a system – This limitation does no more than suggest a high-level application of the abstract procedure to monitoring control systems, e.g., using results of the recited mathematical calculations to adjust system operations. It therefore does no more than generally link a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology).

collecting a digital representation of time series data indicative of measurements of the operation of the system at different instances of time; – This limitation does no more than suggest capturing measurements of a physical system at different instances of time and using said measurements as input data, i.e., merely selecting a particular data source or type of data to be manipulated. It therefore recites insignificant extra-solution activity.

training the neural network model having an autoencoder architecture, wherein the autoencoder architecture includes an encoder configured to [encode]; a linear predictor configured to [propagate]; and a decoder configured to [decode] – As similarly recited in the preamble (training a neural network model), this limitation does no more than merely invoke elements (encoder, linear predictor, decoder) of a neural architecture (autoencoder architecture) as tools to perform an existing abstract idea of analyzing measurements of a physical system using mathematical concepts.

Claim 11 recites substantially similar additional elements to those recited in claim 1, and further recites:

A training system, the training system comprising at least one processor; and a memory having instructions stored thereon that, when executed by the at least one processor, cause the training system to: [collect] – This limitation amounts to mere instructions to implement an abstract idea on a computer or computer components.
Claim 20 recites substantially similar additional elements to those recited in claim 1, and further recites:

A non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method, the method comprising: – This limitation amounts to mere instructions to implement an abstract idea on a computer or computer components.

Step 2B: The additional elements recited in claims 1, 11, and 20, viewed individually or as an ordered combination, do not provide an inventive concept or significantly more than the recited abstract ideas themselves. Claim 1 additionally recites:

A computer-implemented method of training a neural network model [for controlling], comprising: – Mere instructions to implement an abstract idea on a computer or computer components (A computer-implemented method) and mere invocations of neural architecture as a tool to perform an existing abstract idea (training a neural network model) do not provide an inventive concept or significantly more to the recited abstract idea.

[model for] controlling an operation of a system – Generally linking a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology) does not provide an inventive concept or significantly more to the recited abstract idea.
collecting a digital representation of time series data indicative of measurements of the operation of the system at different instances of time; – Receiving and transmitting data is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), “Receiving or transmitting data over a network”); further, data-driven methods for analyzing the dynamics of a system based on its observed measurements over time (e.g., Koopman operator / dynamic mode decomposition (DMD)) are also well-understood, routine, and conventional in the art (see Parmar et al., “A Survey on the Methods and Results of Data-Driven Koopman Analysis in the Visualization of Dynamical Systems”, [page 1 Introduction] and [page 6 Common Uses of Data-Driven DMD]). The recited extra-solution activity of capturing measurements of a physical system at different instances of time and using said measurements as input data thereby does not provide an inventive concept or significantly more to the recited abstract idea.

training the neural network model having an autoencoder architecture, wherein the autoencoder architecture includes an encoder configured to [encode]; a linear predictor configured to [propagate]; and a decoder configured to [decode] – Merely invoking elements (encoder, linear predictor, decoder) of a neural architecture (autoencoder architecture) as tools to perform an existing abstract idea does not provide an inventive concept or significantly more to the recited abstract idea.

Claim 11 recites substantially similar additional elements to those recited in claim 1, and further recites:

A training system, the training system comprising at least one processor; and a memory having instructions stored thereon that, when executed by the at least one processor, cause the training system to: [collect] – Mere instructions to implement an abstract idea on a computer or computer components do not provide an inventive concept or significantly more to the recited abstract idea.
Claim 20 recites substantially similar additional elements to those recited in claim 1, and further recites:

A non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method, the method comprising: – Mere instructions to implement an abstract idea on a computer or computer components do not provide an inventive concept or significantly more to the recited abstract idea.

Even when considered as an ordered combination, the claims ultimately do no more than generically place the claims within the context of adaptive control systems and invoke neural architecture as mere tools to perform abstract ideas; they thereby do not amount to significantly more than the recited abstract ideas themselves. As such, claims 1, 11, and 20 are not patent eligible.

Dependent Claims (Claims 2 and 4-10, Claims 12 and 14-19):

Dependent claims 2, 4-10, 12, and 14-19 narrow the scope of independent claims 1 and 11, and thus merely narrow the recited judicial exceptions. With respect to the independent claims, the recited judicial exceptions are not meaningfully integrated into a practical application, and also do not amount to significantly more than the recited abstract ideas themselves. The dependent claims recite abstract idea limitations similar to those recited within the independent claims, as they also do not provide anything more than mathematical concepts or mental processes that are capable of being performed in the human mind and/or using pen and paper. The dependent claims also do not recite any further additional elements that successfully integrate the recited judicial exceptions into a practical application or amount to significantly more than the recited abstract ideas themselves. Consequently, claims 2, 4-10, 12, and 14-19 are also rejected under 35 U.S.C. 101.

Step 1: Claims 2 and 4-10 are drawn to a method, and claims 12 and 14-19 are drawn to a system/apparatus.
Therefore, each of these claims falls under one of the four categories of statutory subject matter (process/method, machine/apparatus, manufacture/product, or composition of matter).

Step 2A Prong 1: Claims 2, 4-10, 12, and 14-19 each recite a judicially recognized exception of an abstract idea. Claim 2 recites, inter alia:

[controlling the system by] using a linear control law including a control matrix formed by the values of the parameters of the linear predictor – This limitation amounts to a procedure of using mathematical methods (using a linear control law) to determine variables (e.g., control operations), and therefore recites mathematical calculation. It can also be interpreted as reciting a mathematical relationship (linear correlation) between variables (e.g., values of the parameters of the linear predictor and control operations).

Claim 4 recites the same judicial exception as claim 1. Claim 5 recites, inter alia:

wherein the linear predictor is based on a reduced-order model, wherein the reduced-order model is represented by a Koopman operator – This limitation recites using mathematical methods (Koopman operator) to determine variables (reduced-order model), and therefore recites mathematical calculation.

Claim 6 recites, inter alia:

approximating the Koopman operator by use of a data-driven approximation technique – Given that approximating the Koopman operator recites using mathematical methods to determine an approximation (i.e., mathematical calculation), the recited by use of clause does no more than further suggest a generic data-driven designation for the calculation procedure, and thereby merely expands upon the recited mathematical concept (i.e., abstract idea) itself.
Claim 7 recites, inter alia:

approximating the Koopman operator by use of a deep learning technique – Given that approximating the Koopman operator recites using mathematical methods to determine an approximation (i.e., mathematical calculation), the recited by use of clause does no more than further suggest a generic deep learning designation for the calculation procedure, and thereby merely expands upon the recited mathematical concept (i.e., abstract idea) itself.

Claim 8 recites, inter alia:

generating collocation points associated with a function space of the system, wherein the generating is based on the PDEs, the digital representation of time series data and the linearly transformed encoded digital representation; – Generating collocation points amounts to using mathematical methods (collocation methods) to determine locations (collocation points) that satisfy a given differential equation (associated with a function space of the system), and therefore recites mathematical calculation.

Claim 9 recites the same judicial exception as claim 1. Claim 10 recites the same judicial exception as claim 1. Claims 12 and 14-19 recite substantially similar abstract idea limitations to those recited in claims 2 and 4-9, and therefore recite the same judicial exceptions.

Step 2A Prong 2: Claims 5, 7, 15, and 17 do not recite any further additional elements besides those already recited in the independent claims. The following additional elements recited in claims 2, 4, 6, 8-10, 12, 14, 16, and 18-19 also do not integrate the recited judicial exceptions into a practical application. Claim 2 additionally recites:

controlling the system by [using a linear control law] – This limitation does no more than suggest a high-level application of the abstract procedure to monitoring control systems, e.g., using results of the recited mathematical calculations to adjust system operations.
It therefore does no more than generally link a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology). Claim 4 additionally recites:

wherein the digital representation of the time series data is obtained by use of computational fluid dynamics (CFD) simulation or experiments – This limitation does no more than generally link a judicial exception to the field of use of computational fluid dynamics (CFD) without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology).

Claim 6 additionally recites:

wherein the data-driven approximation technique is generated using numerical or experimental snapshots – This limitation does no more than suggest capturing measurements of a physical system at different instances of time and using said measurements as input data, i.e., merely selecting a particular data source or type of data to be manipulated. It therefore recites insignificant extra-solution activity.

Claim 8 additionally recites:

and training the neural network model based on the generated collocation points – This limitation does no more than merely invoke neural architecture as a tool to perform an existing abstract idea of analyzing measurements of a physical system using mathematical concepts.

Claim 9 additionally recites:

generating control commands to control the system based on at least one of: a model-based control and estimation technique or an optimization-based control and estimation technique – This limitation does no more than suggest a high-level application of the abstract procedure to monitoring control systems, e.g., using results of the recited mathematical calculations to generate control commands for adjusting system operations.
The recited model-based control and estimation / optimization-based control and estimation techniques are generic designations that do not provide anything of significance to the recited generation procedure besides high-level concepts of using models/performing optimization when applying calculation results to system operations. Ultimately, the limitation thereby does no more than generally link a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology). Claim 10 additionally recites:

generating control commands to control the system based on a data-driven based control and estimation technique – This limitation does no more than suggest a high-level application of the abstract procedure to monitoring control systems, e.g., using results of the recited mathematical calculations to generate control commands for adjusting system operations. The recited data-driven based control and estimation technique is a generic designation that does not provide anything of significance to the recited generation procedure besides a high-level concept of analyzing data when applying calculation results to system operations. Ultimately, the limitation thereby does no more than generally link a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology).

Claims 12, 14, 16, and 18-19 recite substantially similar additional elements to those recited in claims 2, 4, 6, and 8-9, and therefore also do not integrate the recited judicial exceptions into a practical application.
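Claim 2's "linear control law including a control matrix formed by the values of the parameters of the linear predictor", discussed above, is computationally just state feedback on the latent coordinates. The following minimal sketch uses toy dynamics and a hand-picked gain; in the claim's framing the matrix A would come from the trained linear predictor, but every numerical value here is a hypothetical illustration:

```python
import numpy as np

# Hypothetical latent dynamics z_{k+1} = A @ z_k + B @ u_k (toy values).
A = np.array([[1.1, 0.0],
              [0.0, 0.9]])
B = np.array([[1.0],
              [0.0]])

# A linear control law u_k = -Kc @ z_k with a hand-picked control matrix Kc.
Kc = np.array([[0.4, 0.0]])

closed_loop = A - B @ Kc               # z_{k+1} = (A - B @ Kc) @ z_k
spectral_radius = max(abs(np.linalg.eigvals(closed_loop)))
assert spectral_radius < 1.0           # the gain stabilizes the unstable mode
```

The open-loop system has an eigenvalue of 1.1 (unstable in discrete time); the feedback gain moves it to 0.7, so all closed-loop eigenvalues lie inside the unit circle.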
Step 2B: The additional elements recited in claims 2, 4, 6, 8-10, 12, 14, 16, and 18-19, viewed individually or as a combination, do not provide an inventive concept or otherwise amount to significantly more than the recited abstract ideas themselves. Claim 2 additionally recites:

controlling the system by [using a linear control law] – Generally linking a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology) does not provide an inventive concept or significantly more to the recited abstract idea.

Claim 4 additionally recites:

wherein the digital representation of the time series data is obtained by use of computational fluid dynamics (CFD) simulation or experiments – Generally linking a judicial exception to the field of use of computational fluid dynamics (CFD) without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology) does not provide an inventive concept or significantly more to the recited abstract idea.

Claim 6 additionally recites:

wherein the data-driven approximation technique is generated using numerical or experimental snapshots – Receiving and transmitting data is well-understood, routine, and conventional activity (see MPEP § 2106.05(d), “Receiving or transmitting data over a network”); further, data-driven methods for analyzing the dynamics of a system based on its observed measurements over time (e.g., Koopman operator / dynamic mode decomposition (DMD)) are also well-understood, routine, and conventional in the art (see Parmar et al., “A Survey on the Methods and Results of Data-Driven Koopman Analysis in the Visualization of Dynamical Systems”, [page 1 Introduction] and [page 6 Common Uses of Data-Driven DMD]).
The recited extra-solution activity of capturing measurements of a physical system at different instances of time and using said measurements as input data thereby does not provide an inventive concept or significantly more to the recited abstract idea. Claim 8 additionally recites:

and training the neural network model based on the generated collocation points – Merely invoking neural architecture as a tool to perform an existing abstract idea does not provide an inventive concept or significantly more to the recited abstract idea.

Claim 9 additionally recites:

generating control commands to control the system based on at least one of: a model-based control and estimation technique or an optimization-based control and estimation technique – Generally linking a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology) does not provide an inventive concept or significantly more to the recited abstract idea.

Claim 10 additionally recites:

generating control commands to control the system based on a data-driven based control and estimation technique – Generally linking a judicial exception to the technological environment of adaptive control systems without providing anything more (e.g., specific details that would adequately reflect an improvement to conventional technology) does not provide an inventive concept or significantly more to the recited abstract idea.

Claims 12, 14, 16, and 18-19 recite substantially similar additional elements to those recited in claims 2, 4, 6, and 8-9, and therefore also do not provide an inventive concept or significantly more to the recited abstract idea.
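Claim 8's "generating collocation points", as the rejection characterizes it, amounts to sampling space-time locations at which a PDE residual is evaluated. A minimal sketch, using the 1-D heat equation and its exact solution (both are illustrative choices, not taken from the application):

```python
import numpy as np

rng = np.random.default_rng(2)
nu = 0.1                               # toy diffusivity

# Generate random collocation points in the space-time domain [0,1] x [0,1].
x = rng.uniform(0.0, 1.0, size=100)
t = rng.uniform(0.0, 1.0, size=100)

# Exact solution of the 1-D heat equation u_t = nu * u_xx:
#   u(x, t) = exp(-nu * pi**2 * t) * sin(pi * x)
decay = np.exp(-nu * np.pi**2 * t)
u_t = -nu * np.pi**2 * decay * np.sin(np.pi * x)
u_xx = -np.pi**2 * decay * np.sin(np.pi * x)

# PDE residual evaluated at the collocation points: identically zero for the
# exact solution, and usable as a loss term for an approximate (learned) one.
residual = u_t - nu * u_xx
assert np.allclose(residual, 0.0)
```

For a trained model, the residual at such sampled points would generally be nonzero and could be penalized during training, which is how collocation-based losses are conventionally used.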
Even when considered as an ordered combination, the claims ultimately do no more than generically place the claims within the context of adaptive control systems and/or computational fluid dynamics and invoke neural architecture as mere tools to perform abstract ideas; they thereby do not amount to significantly more than the recited abstract ideas themselves. As such, claims 2, 4-10, 12, and 14-19 also are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-12, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gin et al., (“Deep Learning Models for Global Coordinate Transformations that Linearize PDEs”, available arXiv 7 Nov 2019), hereinafter Gin, further in view of Klus et al., (“Data-driven approximation of the Koopman generator: Model reduction, system identification, and control”, available arXiv 13 Feb 2020), hereinafter Klus.
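The data-driven Koopman approximation that Klus is cited for is conventionally illustrated by exact dynamic mode decomposition in its simplest least-squares form. The sketch below is a generic textbook illustration under that assumption, not code from Klus or from the application:

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshots from a known linear system x_{k+1} = A_true @ x_k (toy data).
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
snapshots = [rng.standard_normal(2)]
for _ in range(20):
    snapshots.append(A_true @ snapshots[-1])
X = np.array(snapshots).T              # columns are states at successive times

# Exact DMD in its simplest form: a least-squares fit of the one-step linear
# map from the paired snapshot matrices X[:, :-1] -> X[:, 1:].
A_dmd = X[:, 1:] @ np.linalg.pinv(X[:, :-1])

assert np.allclose(A_dmd, A_true)      # exactly linear data is recovered
```

On genuinely nonlinear measurements the fitted map is only a best linear approximation, which is what motivates lifting the data through learned observables (the autoencoder role in the claims) before the linear fit.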
Regarding claim 1, Gin teaches A computer-implemented method of training a neural network model for modeling an operation of a system having non-linear dynamics, wherein the non-linear dynamics are represented by partial differential equations (PDEs) (“We develop a deep autoencoder architecture that can be used to find a coordinate transformation which turns a nonlinear PDE into a linear PDE… We demonstrate our method on a number of examples, including the heat equation and Burgers equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs” [Gin Abstract]), comprising: collecting a digital representation of time series data indicative of the operation of the system at different instances of time; (“We demonstrate our method on a number of examples, including the heat equation and Burgers equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs” [Gin Abstract]; “The data for training the neural networks is created by performing numerical simulations of the given PDE. The initial conditions used and discretization details are described for each example below” [Gin page 8 Building Networks for Time-Stepping Dynamics]; “The training data consists of 8000 trajectories from the heat equation…The trajectories consist of 50 equally spaced time steps with Δt = 0:0025” [Gin page 9 Heat Equation]; “In all cases, the training data consists of 120,000 trajectories from Burgers’ equation, each with 51 equally spaced time steps with Δt = 0:002” [Gin page 12 Data]; “The data set mirrors the data used for Burgers’ equation. 
The training data consists of 120,000 trajectories, each with 51 equally spaced times steps” [Gin page 18 Kuramoto-Sivashinsky Equation]; Via simulations, a collection of training data (i.e., digital representation) is formed for a given PDE (e.g., heat equation, Burgers’ equation, Kuramoto-Sivashinsky equation) that represents the operation of a system via trajectories taken at equally spaced time steps (i.e., sequential data taken at different instances of time, thereby time series data)) and training the neural network model having an autoencoder architecture (“We develop a deep autoencoder architecture that can be used to find a coordinate transformation which turns a nonlinear PDE into a linear PDE” [Gin Abstract]; “In this work, we use the universal approximation properties of neural networks to find such linearizing coordinate transformations. The network architecture that we use is shown in Figure 4. The input of the the network uk is the state vector at time tk and the output is the state vector at time tk+1” [Gin page 6 Building Networks for Time-stepping Dynamics]; see network architecture in Figure 4 [Gin page 7]), wherein the autoencoder architecture includes an encoder configured to encode the digital representation into a latent space (“The network consists of three parts: (i) the encoder φ, (ii) the linear dynamics K, and (iii) the decoder φ-1…The encoder consists of the outer encoder χ+I and the inner encoder ψ. The outer encoder performs a coordinate transformation into a space in which the dynamics are linear. 
The inner encoder diagonalizes the system and/or reduces the dimensionality” [Gin page 6 Building Networks for Time-stepping Dynamics]; see χ outer encoder and ψ inner encoder in Figure 4 [Gin page 7]; The encoder is configured to reduce dimensionality of the input, i.e., encode higher-dimensional input data into a lower-dimensional latent space), a linear predictor configured to propagate the encoded digital representation into the latent space with linear transformation determined by values of parameters of the linear predictor (“The network consists of three parts: (i) the encoder φ, (ii) the linear dynamics K, and (iii) the decoder φ-1” [Gin page 6 Building Networks for Time-stepping Dynamics]; “…The resulting dynamics are given by a Koopman operator matrix K” [Gin Abstract]; see deep autoencoder in Figure 1 – “Figure 1. A deep autoencoder is used to find coordinate transformations to linearize PDEs. The encoder finds a set of intrinsic coordinates for which the dynamics are linear. Then the dynamics are given by a matrix K. The decoder transforms back to the original coordinates. 
Multiple time step prediction can be performed by repeated multiplication by the matrix K in the intrinsic coordinates” [Gin page 2]; see network architecture in Figure 4 – K Linear takes output vk from encoder (after ψ inner encoder which reduces dimensionality, i.e., encodes data into latent space) and advances state forward in time to vk+1; The Koopman operator matrix K (i.e., linear predictor [see instant specification ¶ 0009]) is used to advance forward in time (i.e., propagate) encoder output via its parameters representing linearized dynamics of the system (i.e., linear transformation)), and a decoder configured to decode the linearly transformed encoded digital representation (“The network consists of three parts: (i) the encoder φ, (ii) the linear dynamics K, and (iii) the decoder φ-1… The inner decoder ψ-1 and the outer decoder ζ+I are the inverses of the inner and outer encoder, respectively.” [Gin page 6 Building Networks for Time-stepping Dynamics]; see deep autoencoder in Figure 1 – “Figure 1. A deep autoencoder is used to find coordinate transformations to linearize PDEs…The decoder transforms back to the original coordinates” [Gin page 2]; see ψ-1 inner decoder and ζ outer decoder (after K Linear) in Figure 4 [Gin page 7]) to minimize a loss function (“The loss function used to train the network is the sum of five different losses. They are depicted in Figure 5” [Gin page 7 Building Networks for Time-stepping Dynamics]; see depiction of loss functions in Figure 5 [Gin page 8]), wherein the loss function includes a prediction error between outputs of the neural network model decoding time series data indicative of the operation at an instant of time and time series data indicative of the operation at a subsequent instance of time (“Loss 2: prediction loss. The output of the network should accurately predict the state uk+1 when given the state at the previous time uk. 
The loss is given by [equation]” [Gin page 7 Building Networks for Time-stepping Dynamics]; see Loss 2: Prediction in Figure 5 [Gin page 8]; The prediction loss (i.e., error) term calculates loss between the predicted state uk+1 output by the network (φ-1(Kφ(uk))) when using uk (i.e., operation at an instant of time) as input, and the actual state uk+1 (i.e., operation at a subsequent instance of time)), and a residual factor of the PDEs having eigenvalues dependent on the parameters of the linear predictor (“Loss 3: linearity loss. The dynamics on the intrinsic coordinates should be linear. Therefore, we enforce a prediction loss within these coordinates: [equation]” [Gin page 8 Building Networks for Time-stepping Dynamics]; see Loss 3: Linearity in Figure 5 [Gin page 8]; “Because of its linearity, the behavior of the Koopman operator is completely determined by its eigenvalues and eigenfunctions. 
We use deep learning in order to approximate the Koopman eigenfunctions, which satisfy [equation]” [Gin page 6 Building Networks for Time-stepping Dynamics]; “The first PDE that we consider is the one-dimensional heat equation: [equation]…For the heat equation, the discrete-time eigenvalues are [equation]…because the high-frequency waves decay faster than the low-frequency waves, we expect the 21×21 matrix K to have the eigenvalues [equation], and therefore the eigenvalues satisfy [equation]” [Gin pages 9-10 Heat Equation]; The linearity loss term (i.e., residual factor) for the PDE (e.g., heat equation) is based on the Koopman operator matrix (i.e., linear predictor) K having linear dynamics, wherein behavior of the Koopman operator and its parameters is determined by its associated eigenfunctions and eigenvalues) wherein the residual factor of the PDE is based on a Lie operator ([Gin page 8 Building Networks for Time-stepping Dynamics] and see Loss 3: Linearity in Figure 5 [Gin page 8] and [Gin page 6 Building Networks for Time-stepping Dynamics] and [Gin pages 9-10 Heat Equation], as detailed above; The linearity loss term (i.e., residual factor) for the PDE (e.g., heat equation) is based on the Koopman operator, which is in turn based on the Koopman generator, which generates the operator over time. 
As per the instant specification (“an infinitesimal generator L of the Koopman operator family may be defined as: (equation 6)…The generator L is sometimes referred to as a Lie operator” [¶ 0065-0067]), the examiner has interpreted the term Lie operator as encompassing a Koopman generator (i.e., generator of a Koopman operator), and thereby related to the Koopman operator family) However, Gin does not expressly teach controlling an operation of a system based on the determined predictions or using time series data that is indicative of measurements of the operation of a system to train the neural network model, or performing eigen-decomposition to a Lie operator. In the same field of endeavor, Klus teaches a method of applying Koopman operator theory to high-dimensional systems (“We derive a data-driven method for the approximation of the Koopman generator called gEDMD, which can be regarded as a straightforward extension of EDMD (extended dynamic mode decomposition)… Moreover, we apply gEDMD to derive coarse-grained models of high-dimensional systems, and also to determine efficient model predictive control strategies” [Klus Abstract]) which further recites controlling an operation of a system based on the determined predictions (“we apply gEDMD to derive coarse-grained models of high-dimensional systems, and also to determine efficient model predictive control strategies” [Klus Abstract]; “The predictive capabilities of the Koopman operator have also raised interest in the control community, where the aim is to determine a system input u such that the non-autonomous control system ˙x = b(x, u) behaves in a desired way, which results in the following control problem: [equation 14]… In order to achieve a feedback behavior, problem (14) is embedded into a model predictive control (MPC) [52] scheme, where it has to be solved repeatedly over a relatively short horizon while the system (the plant) is running at the same time. 
The first part [t0, t0 +h] of the optimal control u is then applied to the plant, and (14) has to be solved again on a shifted horizon [t0 + h, te + h]” [Klus pages 22-23 Control]) and uses time series data that is indicative of measurements of the operation of a system (“We now assume that we have m measurements of the states of the system, given by {xl} m l=1, and the corresponding time derivatives, given by {x˙l} m l=1. The derivatives might also be estimated from data, cf. [5]” [Klus page 6 Deterministic dynamical systems]; The method utilizes measurements of the states of a system at different instances of time (i.e., time series data)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated controlling an operation of a system and using time series data that is indicative of measurements of the operation of a system as taught by Klus into Gin because they are both directed towards applying Koopman operator theory to high-dimensional systems. Gin already teaches that although the disclosed demonstrations (e.g., heat equation, Burgers’ equation, Kuramoto-Sivashinsky equation) are performed using simulated data, the disclosed data-driven approach can also be applied to experimental data (e.g., snapshots) (“Note that our approach is completely data driven - no knowledge of the underlying equations is needed. Therefore, it can be used for experimental data for which the governing equations are unknown” [Gin page 9 Building Networks for Time-stepping Dynamics]). Gin also already teaches that the disclosed approach would have clear applications to control systems (“The ability to embed nonlinear systems in a linear framework is particularly useful for estimation and control, where a wealth of techniques exist for linear systems. Therefore, it will likely be fruitful to extend these approaches to include inputs and control” [Gin page 21 Conclusion]). 
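For illustration only (this is not code from Gin or Klus), the data-driven step emphasized above — fitting a linear predictor to measured time-series snapshots and rolling it forward for prediction — can be sketched as a least-squares fit on a toy linear system; the matrix `true_K` and all other names are hypothetical.

```python
import numpy as np

# Illustrative sketch only: fit a linear predictor K to measured snapshot
# pairs by least squares, then propagate predictions by repeated
# multiplication. Toy dynamics; all names are hypothetical.

rng = np.random.default_rng(0)

# Stand-in for linearized latent dynamics x_{k+1} = K x_k
true_K = np.array([[0.95, 0.10],
                   [0.00, 0.90]])

# "Measurements": 300 snapshot pairs (x_k, x_{k+1}) from random states
X = rng.standard_normal((2, 300))
Y = true_K @ X

# Least-squares fit K = Y X^+ (X^+ = Moore-Penrose pseudo-inverse)
K = Y @ np.linalg.pinv(X)

# Multi-step prediction by repeated multiplication with K
x0 = np.array([1.0, 1.0])
pred = np.linalg.matrix_power(K, 5) @ x0
truth = np.linalg.matrix_power(true_K, 5) @ x0
print(np.allclose(pred, truth, atol=1e-6))  # True
```

With noise-free snapshots of a genuinely linear system, the least-squares fit recovers the predictor essentially exactly; real measurement data would make the fit approximate.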
Therefore, a person of ordinary skill in the art would recognize the value of incorporating the teachings of Klus to enable further applications of the approach of Gin to training on measurement data of a real-time system for purposes of predictive control strategy. Klus further teaches performing eigen-decomposition to a Lie operator (“The purpose of this study is to present a general framework to compute a matrix approximation of the Koopman generator, both for deterministic and stochastic systems, and to explore a range of applications…1. We reformulate standard EDMD in such a way that it can be used to approximate the generator of the Koopman operator—as well as its eigenvalues, eigenfunctions, and modes—from data without resorting to trajectory integration” [Klus page 2 Introduction]; The disclosed method determines eigenvalues and eigenfunctions (i.e., performs eigen-decomposition) with respect to a Koopman generator (i.e., Lie operator [see instant specification ¶ 0065-0067])) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated performing eigen-decomposition to a Lie operator as taught by Klus into Gin because they are both directed towards applying Koopman operator theory to high-dimensional systems. Incorporating the teachings of Klus into Gin to determine an approximation of the Koopman generator would improve applicability of the combination to control systems, via removing drawbacks of large lag times and increasing suitability for time optimization approaches (“Regardless of the approach, a drawback of Koopman operator based surrogate models is that the control freedom is limited by the finite lag time. While larger lag times are often beneficial for the approximation of the dynamics, this is counterproductive for control, as the control frequency is strongly limited. 
This issue is overcome by the generator approach (15) since we can choose arbitrary time steps here, and results on mixed integer optimal control problems (see, e.g., [57]) suggest that fast switches allow for solutions of any desired accuracy. Moreover, the continuous-time generator model is much better suited for switching time optimization approaches” [Klus page 24 Control]). Regarding claim 2, the combination of Gin and Klus teaches the limitations of parent claim 1, and Klus further teaches controlling the system by using a linear control law including a control matrix formed by the values of the parameters of the linear predictor (“In what follows, we will focus mainly on the generator of the Koopman operator and its properties and applications…The purpose of this study is to present a general framework to compute a matrix approximation of the Koopman generator, both for deterministic and stochastic systems, and to explore a range of applications” [Klus page 2 Introduction]; “The Koopman lifting technique [22, 23] uses the infinitesimal generator L for system identification… First, the Koopman operator for a fixed lag time τ is estimated from trajectory data with the aid of standard EDMD. Then an approximation of the generator is obtained by taking the matrix logarithm, i.e., [equation] where K^τ is the matrix representation of the Koopman operator with respect to the chosen basis ψ (and lag time τ). The last step is to estimate the governing equations in the same way as illustrated in Example 3.3 for gEDMD” [Klus page 16 Koopman lifting technique]; “Since the real-time requirements in MPC are often very hard to satisfy, a promising approach is to replace the system dynamics by a surrogate model, and one possibility is to use the Koopman operator or its generator for prediction. 
Introducing the variable [equation], we obtain a linear system via the approximation L of the generator: [equation]” [Klus pages 23-24 Control]; Via the Koopman lifting technique, a Koopman generator matrix approximation (i.e., control matrix) can be formed from the Koopman operator matrix representation (e.g., the linear predictor of Gin) and its parameters, wherein the Koopman generator can then be used for predictive control of the system via the linear generator approximation (i.e., linear control law)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated controlling the system by using a linear control law including a control matrix formed by the values of the parameters of the linear predictor as taught by Klus into Gin because they are both directed towards applying Koopman operator theory to high-dimensional systems. Incorporating the teachings of Klus into Gin to determine an approximation of the Koopman generator would improve applicability of the combination to control systems, via removing drawbacks of large lag times and increasing suitability for time optimization approaches (“Regardless of the approach, a drawback of Koopman operator based surrogate models is that the control freedom is limited by the finite lag time. While larger lag times are often beneficial for the approximation of the dynamics, this is counterproductive for control, as the control frequency is strongly limited. This issue is overcome by the generator approach (15) since we can choose arbitrary time steps here, and results on mixed integer optimal control problems (see, e.g., [57]) suggest that fast switches allow for solutions of any desired accuracy. 
Moreover, the continuous-time generator model is much better suited for switching time optimization approaches” [Klus page 24 Control]). Regarding claim 5, the combination of Gin and Klus teaches the limitations of parent claim 1, and Gin further teaches wherein the linear predictor is based on a reduced-order model, wherein the reduced-order model is represented by a Koopman operator (“Although the Koopman operator acts on an infinite-dimensional space, we can obtain a finite-dimensional approximation by considering the space spanned by finitely many Koopman eigenfunctions. Acting on this space, the Koopman operator is just a matrix. Therefore, Koopman operator theory provides an approach to find an intrinsic coordinate system in which the dynamical system has linear dynamics” [Gin page 6 Building Networks for Time-stepping Dynamics]; By linearizing nonlinear dynamical systems, the Koopman operator (i.e., linear predictor) performs model order reduction by reducing high-dimensional (and possibly infinite-dimensional) dynamical systems to their key dynamical features) Regarding claim 6, the combination of Gin and Klus teaches the limitations of parent claim 5, and Gin further teaches approximating the Koopman operator by use of a data-driven approximation technique (“We use deep learning in order to approximate the Koopman eigenfunctions” [Gin page 6 Building Networks for Time-stepping Dynamics]; “The data for training the neural networks is created by performing numerical simulations of the given PDE…Note that our approach is completely data driven – no knowledge of the underlying equations is needed. Therefore, it can be used for experimental data for which the governing equations are unknown” [Gin page 9 Building Networks for Time-stepping Dynamics]; Training the neural network model (i.e., autoencoder architecture), including approximating parameters of Koopman operator matrix K, is data-driven). 
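The Koopman lifting step quoted above — estimate the lag-τ operator from data, then take a matrix logarithm to approximate the generator — can be illustrated on a toy linear system. The matrix exponential and logarithm below are computed via eigen-decomposition purely for illustration; the system `L_true` and all names are hypothetical, not drawn from Klus.

```python
import numpy as np

# Illustrative sketch only of the "Koopman lifting" idea: given the lag-tau
# operator K_tau, a generator approximation is L = log(K_tau) / tau. The
# matrix exp/log are done via eigen-decomposition; toy system, hypothetical names.

tau = 0.1
L_true = np.array([[-1.0, 0.5],
                   [ 0.0, -2.0]])        # generator of x_dot = L_true x

# Exact lag-tau operator K_tau = exp(tau * L_true) via eigen-decomposition
w, V = np.linalg.eig(L_true)
K_tau = (V @ np.diag(np.exp(tau * w)) @ np.linalg.inv(V)).real

# Recover the generator: L_est = log(K_tau) / tau
mu, U = np.linalg.eig(K_tau)
L_est = (U @ np.diag(np.log(mu)) @ np.linalg.inv(U)).real / tau

print(np.allclose(L_est, L_true, atol=1e-8))  # True
```

The eigen-decomposition route works here because the toy matrix is diagonalizable with positive real operator eigenvalues; general matrix logarithms need more care (e.g., a Schur-based method).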
Klus further teaches a data-driven approximation technique that is generated using numerical or experimental snapshots (“As a more complex example, we derive a coarse-grained model from molecular dynamics simulations of alanine dipeptide, which has been used as a test case in numerous previous studies. The data set is the same as in reference [51] and comprises one million snapshots of Langevin dynamics saved every 1 ps” [Klus page 21 Example 2: Alanine dipeptide]). Regarding claim 7, the combination of Gin and Klus teaches the limitations of parent claim 5, and Gin further teaches approximating the Koopman operator by use of a deep learning technique (“We use deep learning in order to approximate the Koopman eigenfunctions” [Gin page 6 Building Networks for Time-stepping Dynamics]) Regarding claim 8, the combination of Gin and Klus teaches the limitations of parent claim 1, and Gin further teaches generating collocation points associated with a function space of the system, wherein the generating is based on the PDEs, the digital representation of time series data and the linearly transformed encoded digital representation; (see Figure 1 – “Figure 1. A deep autoencoder is used to find coordinate transformations to linearize PDEs. The encoder finds a set of intrinsic coordinates for which the dynamics are linear. Then the dynamics are given by a matrix K. The decoder transforms back to the original coordinates.” [Gin page 2]; “Loss 1: autoencoder loss. 
We want an invertible transformation between the state space and intrinsic coordinates for which the dynamics are linear” [Gin page 7 Building Networks for Time-stepping Dynamics]; The autoencoder architecture uses the input data (i.e., digital representation of time series data) to determine intrinsic coordinates (i.e., collocation points) within the state space (i.e., function space) of the system for which the associated PDE dynamics become linear) and training the neural network model based on the generated collocation points. (“The loss function used to train the network is the sum of five different losses…Loss 1: autoencoder loss. We want an invertible transformation between the state space and intrinsic coordinates for which the dynamics are linear. The transformation into the intrinsic coordinates is given by the encoder φ and the transformation back into the state space is given by the decoder φ−1. Therefore, we wish for the autoencoder φ−1 ◦ φ to reconstruct the inputs of the network as closely as possible. This loss is given by [equation]” [Gin page 7 Building Networks for Time-stepping Dynamics]). 
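The loss terms discussed above (autoencoder reconstruction, prediction, and linearity) can be written down directly. The sketch below substitutes simple linear maps for the deep encoder/decoder of the reference so the terms can be checked exactly; the matrices `E`, `D`, `K` and all function names are hypothetical illustrations, not the reference's implementation.

```python
import numpy as np

# Simplified illustration of the loss terms: linear encoder/decoder maps
# stand in for the deep networks of the reference. Hypothetical names.

E = np.array([[2.0, 0.0],
              [0.0, 0.5]])     # encoder phi(u) = E u
D = np.linalg.inv(E)           # decoder phi^{-1}(v) = D v
K = np.array([[0.9, 0.0],
              [0.0, 0.7]])     # linear latent dynamics v_{k+1} = K v_k

def phi(u):     return E @ u
def phi_inv(v): return D @ v

def autoencoder_loss(u):         # Loss 1: ||u - phi^{-1}(phi(u))||
    return np.linalg.norm(u - phi_inv(phi(u)))

def prediction_loss(u_k, u_k1):  # Loss 2: ||u_{k+1} - phi^{-1}(K phi(u_k))||
    return np.linalg.norm(u_k1 - phi_inv(K @ phi(u_k)))

def linearity_loss(u_k, u_k1):   # Loss 3: ||phi(u_{k+1}) - K phi(u_k)||
    return np.linalg.norm(phi(u_k1) - K @ phi(u_k))

# For data generated by the latent-linear model itself, all three terms vanish
u_k = np.array([1.0, -2.0])
u_k1 = phi_inv(K @ phi(u_k))
print(autoencoder_loss(u_k), prediction_loss(u_k, u_k1), linearity_loss(u_k, u_k1))
```

In training, a weighted sum of such terms would be minimized over the encoder, decoder, and K parameters jointly.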
Regarding claim 9, the combination of Gin and Klus teaches the limitations of parent claim 1, and Klus further teaches generating control commands to control the system based on at least one of: a model-based control and estimation technique or an optimization-based control and estimation technique (“The predictive capabilities of the Koopman operator have also raised interest in the control community, where the aim is to determine a system input u such that the non-autonomous control system ˙x = b(x, u) behaves in a desired way, which results in the following control problem: [equation 14]…In order to achieve a feedback behavior, problem (14) is embedded into a model predictive control (MPC) [52] scheme, where it has to be solved repeatedly over a relatively short horizon while the system (the plant) is running at the same time” [Klus pages 22-23 Control]; In light of the instant specification (“In some embodiments, the control unit 208 may be configured to generate the control commands for controlling the system 204 based on at least one of a model-based control and estimation technique or an optimization-based control and estimation technique, for example, a model predictive control (MPC) technique” [¶ 0080]), the examiner has interpreted model-based and optimization-based control and estimation techniques as encompassing model predictive control (MPC) techniques. 
The disclosed method determines system input (i.e., generates control commands) to make the control system behave in a desired way using an MPC technique) Regarding claim 10, the combination of Gin and Klus teaches the limitations of parent claim 1, and Klus further teaches generating control commands to control the system based on a data-driven based control and estimation technique ([Klus pages 22-23 Control], as detailed above in claim 9; “Since the real-time requirements in MPC are often very hard to satisfy, a promising approach is to replace the system dynamics by a surrogate model, and one possibility is to use the Koopman operator or its generator for prediction” [Klus page 23 Control]; In light of the instant specification (“Typically, use of the operational data to design the control policies or the control commands is referred as the data-driven based control and estimation technique” [¶ 0082]), the examiner has interpreted a data-driven based control and estimation technique as encompassing any technique that uses operational data to control the system. To determine appropriate system input (i.e., control commands) to the control system, the disclosed method models (i.e., estimates) system dynamics via Koopman operator/generator approximation, which are further determined using input time series data (see [Klus page 6 Deterministic dynamical systems], as detailed above in claim 1) which measure states of the system (i.e., system operations over time)) Regarding claim 11, it is a system/apparatus claim that corresponds to the method of claim 1, which is already taught by the combination of Gin and Klus. 
Gin further teaches A training system, the training system comprising at least one processor; and a memory having instructions stored thereon that, when executed by the at least one processor, cause the training system to: perform the disclosed functions (“We demonstrate our method on a number of examples, including the heat equation and Burgers equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs” [Gin Abstract]; “The training data consists of 8000 trajectories from the heat equation” [Gin page 9 Heat Equation]; “In all cases, the training data consists of 120,000 trajectories from Burgers’ equation” [Gin page 12 Data]; “The data set mirrors the data used for Burgers’ equation. The training data consists of 120,000 trajectories, each with 51 equally spaced times steps” [Gin page 18 Kuramoto-Sivashinsky Equation]; Training the autoencoder architecture on datasets comprising hundreds of thousands of trajectories inherently requires a computer with adequate processing (i.e., at least one processor) and storage (i.e., memory) capabilities to perform the disclosed functions). Consequently, claim 11 is rejected for the same reasons as claim 1. Regarding claim 20, it is a product claim that corresponds to the method of claim 1, which is already taught by the combination of Gin and Klus. 
Gin further teaches A non-transitory computer readable storage medium embodied thereon a program executable by a processor for performing a method, the method comprising: the disclosed functions ([Gin Abstract] and [Gin page 9 Heat Equation] and [Gin page 12 Data] and [Gin page 18 Kuramoto-Sivashinsky Equation], as detailed above in claim 11; Training the autoencoder architecture on datasets comprising hundreds of thousands of trajectories inherently requires a computer with adequate processing and storage capabilities (i.e., a processor coupled to a storage medium) for performing the disclosed functions). Consequently, claim 20 is rejected for the same reasons as claim 1. Regarding claims 12 and 15-19, they recite substantially similar limitations to those recited in claims 2 and 5-9, which are already taught by the combination of Gin and Klus. Consequently, claims 12 and 15-19 are rejected for the same reasons as claims 2 and 5-9. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Gin and Klus, as applied to claims 1 and 11 above, further in view of Brunton et al., (“Modern Koopman Theory for Dynamical Systems”, available arXiv 29 Oct 2021), hereinafter Brunton. Regarding claim 4, the combination of Gin and Klus teaches the limitations of parent claim 1, and Klus further teaches wherein the digital representation of the time series data is obtained by use of simulation or experiments (“As a more complex example, we derive a coarse-grained model from molecular dynamics simulations of alanine dipeptide, which has been used as a test case in numerous previous studies. The data set is the same as in reference [51] and comprises one million snapshots of Langevin dynamics saved every 1 ps” [Klus page 21 Example 2: Alanine dipeptide]). However, the combination does not explicitly teach wherein the digital representation of the time series data is obtained by use of computational fluid dynamics (CFD) simulation or experiments. 
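The simulation-based data generation quoted from Gin above — trajectories of a PDE saved at equally spaced time steps — can be sketched with a small finite-difference time-stepper for the heat equation. The grid size, step count, and all names below are illustrative assumptions, not the reference's actual settings.

```python
import numpy as np

# Illustrative sketch only of generating time-series training data by
# numerical simulation of a PDE (trajectories at equally spaced time steps).
# Hypothetical grid/step settings and names.

def heat_trajectory(u0, dx, dt=2.5e-3, steps=50):
    """Explicit finite-difference time stepping of u_t = u_xx, ends held fixed."""
    traj = [u0.copy()]
    u = u0.copy()
    for _ in range(steps - 1):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * lap          # stable here since dt <= dx**2 / 2
        traj.append(u.copy())
    return np.stack(traj)         # shape (steps, grid_points)

x = np.linspace(0.0, np.pi, 32)
traj = heat_trajectory(np.sin(x), dx=x[1] - x[0])
print(traj.shape)  # (50, 32)
```

Each row is one snapshot in time, so many such trajectories from varied initial conditions form a training set of the kind described in the quoted passages.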
In the same field of endeavor, Brunton teaches a method of applying Koopman operator theory to high-dimensional systems (“Koopman spectral theory has emerged as a dominant perspective over the past decade, in which nonlinear dynamics are represented in terms of an infinite-dimensional linear operator acting on the space of all possible measurement functions of the system. This linear representation of nonlinear dynamics has tremendous potential to enable the prediction, estimation, and control of nonlinear systems with standard textbook methods developed for linear systems…In this review, we provide an overview of modern Koopman operator theory, describing recent theoretical and algorithmic developments and highlighting these methods with a diverse range of applications” [Brunton Abstract]) wherein the digital representation of the time series data is obtained by use of computational fluid dynamics (CFD) simulation or experiments (“Dynamic mode decomposition, originally introduced by Schmid [380, 379] in the fluid dynamics community, has rapidly become the standard algorithm to approximate the Koopman operator from data [366, 433, 227]... 
The DMD algorithm was originally developed to identify spatio-temporal coherent structures from high-dimensional time-series data, as are commonly found in fluid dynamics.” [Brunton page 21 Dynamic mode decomposition]; “The DMD algorithm is purely data-driven, and is thus equally applicable to experimental and numerical data” [Brunton page 25 Alternative optimizations for DMD]; “DMD originated in the fluid dynamics community [379], and has since been applied to a wide range of flow geometries (jets, cavity flow, wakes, channel flow, boundary layers, etc.), to study mixing, acoustics, and combustion, among other phenomena” [Brunton page 29 Fluid dynamics]; see Figures 3.2 to 3.4 – fluid dynamics examples [Brunton pages 30-32]; The disclosure teaches that Koopman operator approximation can be performed using computational processes (e.g., dynamic mode decomposition) performed on experimental, high-dimensional time-series data, such as that found in the field of fluid dynamics) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated obtaining the digital representation of the time series data by use of computational fluid dynamics (CFD) simulation or experiments as taught by Brunton into the combination of Gin and Klus because Gin, Klus, and Brunton are all directed towards methods of applying Koopman operator theory to high-dimensional systems. 
Given recognition in the art that dynamic mode decomposition (DMD), which is closely related to Koopman operator theory (“The connection between DMD and the Koopman operator [366, 433, 227] has motivated several extensions for strongly nonlinear systems” [Brunton page 28 Nonlinear measurements and latent variables]; “several approaches have been proposed to extend DMD, including with nonlinear measurements…It is expected that neural network representations of dynamical systems, and Koopman embeddings in particular, will remain a growing area of interest in data-driven dynamics” [Brunton page 90 Discussion and outlook]), has applications to a wide variety of fields including fluid dynamics (“Algorithms such as DMD [1, 2], EDMD [3, 4], SINDy [5], and their various kernel- [3, 6, 7], tensor- [8, 9, 10], or neural network-based [11, 12, 13] extensions and generalizations have been successfully applied to a plethora of different problems, including molecular and fluid dynamics” [Klus page 1 Introduction]), a person of ordinary skill in the art would recognize the value of incorporating the teachings of Brunton to enable applicability of the combination to fluid dynamics simulations/experiments as a particular field of use.

Regarding claim 14, it recites substantially similar limitations to those recited in claim 4, which are already taught by the combination of Gin, Klus, and Brunton. Consequently, claim 14 is rejected for the same reasons as claim 4.

Response to Arguments

The remarks filed 11/25/2025 have been fully considered. Applicant’s remarks [Remarks pages 10-11] traversing the non-eligible subject matter rejections under 35 U.S.C. 101 set forth in the office action mailed 08/27/2025, in view of claims 1-2, 4-12, and 14-20 as amended, have been considered but are not persuasive.
Applicant alleges that the claimed invention is directed towards a specific technical improvement in training a neural network for controlling an operation of a system having non-linear dynamics. The examiner respectfully disagrees. Applicant is directed towards the grounds of rejection under 35 U.S.C. 101 with respect to amended claims 1-2, 4-12, and 14-20 set forth above. Applicant’s arguments are further summarized and addressed below.

Applicant argues, citing to [¶ 0132] of the specification, that the claimed invention’s basic character is technical in nature due to physics-informed terms being integrated into the loss function of the neural network model, thereby alleviating the need for a large amount of data by assimilating knowledge of the differential equations into the training process.

In response, the examiner notes that for a finding of eligibility based on improvements to the functioning of a computer, technology, or technical field, the judicial exception alone cannot provide the improvement (see MPEP § 2106.05(a)). As currently recited, the claimed “residual factor of the PDEs” term within the loss function “having eigenvalues dependent on parameters of the linear predictor”, when interpreted under the broadest reasonable interpretation, amounts to no more than reciting abstract steps of modeling observed dynamics through mathematical relationships and mathematical calculation within an optimization procedure. As such, the claimed improvement of “alleviating the need for a large amount of data” in the training process amounts to no more than merely claiming the improved efficiency inherent to implementing an abstract idea on a neural network (see MPEP § 2106.05(f)(2)), rather than an apparent improvement to the functioning of the underlying model itself.
Nevertheless, even if the limitations at issue are interpreted as reciting additional elements beyond an abstract concept, it is noted that physics-informed neural networks (PINNs), i.e., exploiting physics laws to guide the optimization of DNNs, are indeed conventional in the art and widely understood to be applicable to a variety of fields (see Huang, “Applications of Physics-Informed Neural Networks in Power Systems – A Review”, included in Notice of References Cited mailed 08/27/2025, [Abstract and pages 572-573 Introduction]). As such, it is not apparent from applicant’s argument, the cited section of the specification, or the limitations of the claims themselves, how the claimed invention improves upon conventional standards within the field of training physics-informed neural network models.

Applicant argues that the claimed training method is analogous to the training method of example 39 of the subject matter eligibility examples provided by the USPTO. In response, the examiner respectfully disagrees with the assertion that the claimed method at issue is “analogous” to example 39, which was found to be eligible at Step 2A Prong 1 due to not reciting a judicial exception. In contrast, the claims at issue do indeed recite judicial exceptions, as identified in the Step 2A Prong 1 analysis above.

Applicant argues that the prior office action did not reject dependent claim 3 (whose limitations have since been incorporated into the amended independent claims) under Step 2A Prong 2 or Step 2B, and that the amended independent claims are therefore eligible. In response, the examiner respectfully notes that applicant has misconstrued the analysis of the claims. In line with the 2019 PEG guidance, analysis under Step 2A Prong 2 and Step 2B is directed to any additional elements recited in the claims beyond the judicial exception identified in Step 2A Prong 1.
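As general background on the physics-informed loss construction discussed in the preceding paragraphs, such a composite loss pairs a data-fit term with a penalty on the residual of an assumed governing equation. The sketch below is a generic illustration under an assumed dynamics dx/dt = -k·x, not the claimed method or code from any cited reference:

```python
# Generic sketch of a physics-informed loss: a sparse data-fit term plus a
# penalty on the residual of an assumed governing equation dx/dt = -k * x.
# Illustrative of the general PINN idea only, not the claimed invention.
import numpy as np

def physics_informed_loss(x_pred, t, x_obs, obs_idx, k, lam=1.0):
    """x_pred  : model outputs on a dense collocation grid t.
    x_obs   : sparse observations located at grid indices obs_idx.
    k, lam  : assumed decay rate and residual weight (illustrative)."""
    data_loss = np.mean((x_pred[obs_idx] - x_obs) ** 2)
    # Residual of dx/dt + k*x = 0, with the derivative estimated by
    # finite differences on the collocation grid.
    residual = np.gradient(x_pred, t) + k * x_pred
    return data_loss + lam * np.mean(residual ** 2)
```

Because the residual term is evaluated over the whole collocation grid, it constrains the model even where no observations exist, which is the mechanism behind the "alleviating the need for a large amount of data" argument.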
It was explicitly found in the analysis of dependent claim 3, given that its limitations were entirely addressed in Step 2A Prong 1, that the claim at issue did not recite any further additional elements beyond the judicial exception (besides those already recited and addressed in analysis of parent claim 1). As such, analysis did not continue in Step 2A Prong 2 or Step 2B and the claim was found ineligible.

Applicant has not presented further arguments with respect to the dependent claims. As such, amended claims 1-2, 4-12, and 14-20 stand rejected under 35 U.S.C. 101.

Applicant’s remarks [Remarks pages 11-14] traversing the obviousness rejections under 35 U.S.C. 103 set forth in the office action mailed 08/27/2025, in view of claims 1-2, 4-12, and 14-20 as amended, have been considered but are not persuasive. Applicant alleges that neither Gin nor Klus, alone or in combination, discloses the features of amended independent claim 1. The examiner respectfully disagrees. Applicant is directed towards the grounds of rejection under 35 U.S.C. 103 with respect to amended claims 1-2, 4-12, and 14-20 set forth above. Applicant’s arguments are further summarized and addressed below.

Applicant argues that the Lie operator L is a subset or special case of the Koopman operator family based on “sufficiently smooth” dynamics, i.e., a species of the genus Koopman operator, and therefore not to be assumed from disclosure of the Koopman operator. It is thereby argued that the cited combination of references does not teach or provide motivation for the residual factor of the PDE [being] based on the Lie operator, and that Klus does not teach or suggest the Lie operator.

In response, the examiner maintains the assertion that interpretation of the claim term “Lie operator” as being related to the Koopman operator family, and further analogous to the Koopman generator disclosed in Klus, is indeed appropriate.
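For reference, the relationship at issue between the Koopman semigroup, its generator, and the Lie derivative is a standard result, consistent with the specification and cited references. For dynamics $\dot{x} = f(x)$ with flow map $\Phi^t$ acting on observables $g$:

```latex
(\mathcal{K}^t g)(x) = g\bigl(\Phi^t(x)\bigr), \qquad
(\mathcal{L} g)(x) = \lim_{t \to 0^+} \frac{(\mathcal{K}^t g)(x) - g(x)}{t}
                  = f(x) \cdot \nabla g(x)
```

That is, for sufficiently smooth dynamics the infinitesimal generator $\mathcal{L}$ of the Koopman family acts on an observable $g$ as the Lie derivative of $g$ along the vector field $f$, which is the correspondence quoted from ¶ 0066 of the specification and the references detailed below.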
The disclosed teachings are supportive of a finding of obviousness, particularly upon consideration of what would be known to one of ordinary skill in the art based upon 1) the combination of references and 2) additional relevant sources at the time. Not only does the specification acknowledge the apparent correspondence (“A squared matrix is used to approximate the Lie operator, which in turn is related to the Koopman operator generator” [¶ 0014]; “The generator L is sometimes referred to as a Lie operator. For example, the generator L is a Lie derivative of the function g along the vector field f(x) when the dynamics is given by dx/dt=f(x)” [¶ 0066]), but several additional references of relevance in the art also support this finding, as detailed below:

See Brunton et al., “Modern Koopman Theory for Dynamical Systems”, included in Notice of References Cited mailed 08/27/2025 – “The generator L has been called the Lie operator [216], as it is the Lie derivative of g along the vector field f (x) when the dynamics is given by (1.1)” [page 11]

See Gadginmath et al., “Data-Driven Feedback Linearization using the Koopman Generator” – “On the space C1(X) of continuously differentiable functions on X, the following holds:…The above is the sense in which the Lie derivative operator is the Koopman generator, or more precisely, the infinitesimal generator of the Koopman operator” [page 2]

See Schulze et al., “Identification of MIMO Wiener-type Koopman Models for Data-Driven Model Reduction using Deep Learning” – “The generator Lf : F → F, also referred to as Lie operator, belongs to the Koopman operator family and induces linear dynamics” [page 4]

While there is nothing in Gin to suggest that the disclosed types of dynamical systems being modeled by partial differential equations (PDEs) across continuous intervals (e.g., [page 9 Heat Equation], [page 12 Burger’s Equation], [page 17 Kuramoto-Sivashinsky Equation]) would not have “sufficiently smooth” dynamics such
that the generator of the Koopman operator would be considered the Lie operator, it is also noted that the teachings of the disclosure are not limited to the particular dynamical systems that were explicitly considered [Gin pages 20-21 Conclusion]. Additionally, for any “deterministic dynamical system” defined by an “ordinary differential equation”, Klus defines “the infinitesimal generator L” of Koopman operators in such a manner that corresponds with the teachings of the above references that explicitly reference the Lie operator [Klus page 3 Deterministic dynamical systems]. Ultimately, there is clear basis to establish that a person of ordinary skill in the art would be able to 1) recognize the known and established relationship between Koopman operator theory and the Lie operator, 2) be motivated to combine teachings of Gin and Klus (based on the recognized advantages of leveraging generator-based models over operator-based models for control systems ([Klus page 24 Control] as detailed above)) to thereby explicitly approximate the Koopman generator, and 3) apply necessary inferences and creative steps to train the given model on a dynamical system with “sufficiently smooth dynamics” such that the Koopman generator would be defined by the Lie operator.

Applicant has not presented further arguments with respect to the dependent claims. As such, amended claims 1-2, 4-12, and 14-20 stand rejected under 35 U.S.C. 103.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gadginmath et al. (“Data-Driven Feedback Linearization using the Koopman Generator”, available arXiv 10 Oct 2022) discloses a theoretical framework for data-driven feedback linearization of nonlinear control-affine systems. Schulze et al.
(“Identification of MIMO Wiener-type Koopman Models for Data-Driven Model Reduction using Deep Learning”, available arXiv 4 Apr 2022) discloses a Koopman deep-learning strategy combining autoencoders and linear dynamics that generates low-order surrogate models of MIMO Wiener type.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIJAY M BALAKRISHNAN whose telephone number is (571) 272-0455. The examiner can normally be reached 10am-5pm EST Mon-Thurs. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JENNIFER WELCH, can be reached on (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/V.M.B./
Examiner, Art Unit 2143

/JENNIFER N WELCH/
Supervisory Patent Examiner, Art Unit 2143

Prosecution Timeline

Nov 02, 2022: Application Filed
Aug 21, 2025: Non-Final Rejection (§101, §103, §112)
Nov 25, 2025: Response Filed
Feb 04, 2026: Final Rejection (§101, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585912: GATED LINEAR CONTEXTUAL BANDITS (2y 5m to grant; granted Mar 24, 2026)
Patent 12468967: METHOD AND SYSTEM FOR GENERATING A SOCIO-TECHNICAL DECISION IN RESPONSE TO AN EVENT (2y 5m to grant; granted Nov 11, 2025)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 43%
With Interview: 99% (+85.7%)
Median Time to Grant: 3y 12m
PTA Risk: Moderate

Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
