Prosecution Insights
Last updated: April 19, 2026
Application No. 17/969,721

HYPER NETWORK MACHINE LEARNING ARCHITECTURE FOR SIMULATING PHYSICAL SYSTEMS

Final Rejection (§101, §103)
Filed: Oct 20, 2022
Examiner: KAPOOR, DEVAN
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Laboratories Europe GmbH
OA Round: 2 (Final)
Grant Probability: 11% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 3m
Grant Probability with Interview: 28%

Examiner Intelligence

Career Allow Rate: 11% (grants only 11% of cases; 1 granted / 9 resolved; -43.9% vs TC avg)
Interview Lift: +16.7% (with vs. without interview, across resolved cases; a strong lift of roughly +17%)
Typical Timeline: 3y 3m average prosecution; 33 applications currently pending
Career History: 42 total applications across all art units

Statute-Specific Performance

Statute   Rate    vs TC Avg
§101      38.1%   -1.9%
§103      43.9%   +3.9%
§102      10.8%   -29.2%
§112      5.8%    -34.2%

TC Avg = Tech Center average estimate • Based on career data from 9 resolved cases

Office Action

Grounds: §101, §103
DETAILED ACTION

This action is responsive to the application filed on 10/24/2025. Claims 1-20 are pending and have been examined. This action is Final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Response to Arguments

Argument 1: The applicant argues that claims 1-15 are not directed to an abstract idea under § 101 and are patent eligible because they recite a concrete, practical application of machine learning technology for simulating physical systems using a hyper network and a main network. The applicant contends that the Office has not met its burden under the 2019 Patent Eligibility Guidance and MPEP 2106, asserting that the claims are not directed to a mental process, mathematical concept, or method of organizing human activity, and that training and using machine learning models cannot practically be performed in the human mind. In particular, the applicant relies on USPTO Subject Matter Eligibility Example 39 and the August USPTO Memorandum to argue that training a neural network, including a hyper network configured to generate parameters for a main network, does not recite a judicial exception and should not be characterized as a mental process or mathematical calculation merely because mathematics may underlie the implementation. The applicant further asserts that the claims do not recite any mathematical relationships, formulas, or calculations in the claim language itself, and that mathematical concepts described in the specification but not recited in the claims cannot form the basis of a § 101 rejection. Additionally, the applicant argues that, even if the claims were viewed as involving an abstract idea under Step 2A Prong One, they integrate any such idea into a practical application under Prong Two by solving technical problems in the field of physical system simulation, such as long simulation times, limited adaptability, and poor compatibility with specialized hardware like GPUs, through a specific hyper network-based architecture that improves simulation accuracy, speed, and hardware efficiency. The applicant also maintains that the claims recite significantly more under Step 2B because they include non-conventional features and combinations that go beyond what is well-understood, routine, or conventional, and because the cited prior art fails to disclose or suggest the claimed features. Accordingly, the applicant requests withdrawal of the rejection.

Examiner Response to Argument 1: The examiner has considered the argument set forth above; however, the argument is not persuasive because, when the claims are evaluated as a whole in light of the claim mappings previously set forth, the claims remain directed to judicial exceptions, are not integrated into a practical application, and do not recite significantly more.
Although the applicant asserts that the claims are not directed to a mental process or mathematical concept and relies on Example 39 and the August USPTO Memorandum, the independent claims expressly recite training a hyper network to learn an external parameter space, generating main network parameters by inputting external parameters from that learned space, and using Fourier layers that operate in spatial and frequency domains, all of which were mapped to mathematical operations such as spectral transforms, parameterized linear operators, optimization routines, and data transformations. As shown in the claim mapping, limitations such as learning an external parameter space, updating parameters using stochastic gradient descent until a loss threshold is reached, adapting Fourier layers based on learned parameters, using Taylor expansions, bilevel optimization, and Fast Fourier Transforms constitute mathematical concepts and algorithmic data processing steps recited in the claim language itself, rather than merely underlying implementation details.

Further, the additional limitations relied upon by the applicant, including generating a main network after training, simulating a physical system, receiving system or external parameters, instantiating the network on hardware, and executing the method using GPUs, were mapped as generic computer implementation or post-solution activity that merely applies the abstract idea in a particular technological environment, and they therefore do not integrate the judicial exception into a practical application under Step 2A Prong Two. Unlike Example 39, where the claims were found not to recite a judicial exception, the instant claims explicitly recite the mathematical mechanisms by which the results are achieved, and the asserted improvements in accuracy, speed, or hardware compatibility are described only at a high level without being reflected in specific claim limitations that improve the functioning of a computer itself. Finally, as demonstrated by the mapping, the claimed features represent well-understood, routine, and conventional techniques in machine learning, including training neural networks, generating and updating parameters, and applying spectral and spatial transformations, and therefore do not provide an inventive concept under Step 2B. Accordingly, the rejection of the claims under 35 U.S.C. 101 is maintained.
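For orientation, the architecture that the claims and the mappings describe can be made concrete in a few lines: a hyper network maps external parameters (e.g., physical coefficients such as viscosity) to the full weight vector of a separate main network, which then performs the simulation. The sketch below is a minimal PyTorch illustration of that split only; the layer sizes, names, and the toy main network are assumptions for exposition, not the applicant's disclosed implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_DIM, HIDDEN = 8, 16
N_PARAMS = HIDDEN * IN_DIM + HIDDEN + HIDDEN  # w1 + b1 + w2 = 160

class HyperNetwork(nn.Module):
    """Maps external parameters to the flat weight vector of the main network."""
    def __init__(self, ext_dim: int, main_param_count: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(ext_dim, 64),
            nn.ReLU(),
            nn.Linear(64, main_param_count),
        )

    def forward(self, ext_params: torch.Tensor) -> torch.Tensor:
        return self.body(ext_params)

def main_network(x: torch.Tensor, flat_w: torch.Tensor) -> torch.Tensor:
    # The main network stores no weights of its own; every parameter is
    # supplied by the hyper network's output.
    w1 = flat_w[: HIDDEN * IN_DIM].view(HIDDEN, IN_DIM)
    b1 = flat_w[HIDDEN * IN_DIM : HIDDEN * IN_DIM + HIDDEN]
    w2 = flat_w[HIDDEN * IN_DIM + HIDDEN :].view(1, HIDDEN)
    return F.linear(torch.relu(F.linear(x, w1, b1)), w2)

hyper = HyperNetwork(ext_dim=3, main_param_count=N_PARAMS)
ext = torch.tensor([0.01, 1.0, 0.5])               # e.g., viscosity, domain size, forcing
weights = hyper(ext)                               # "generating the main network parameters"
y = main_network(torch.randn(4, IN_DIM), weights)  # run the generated network on a batch
```

The eligibility dispute is orthogonal to these mechanics: the applicant characterizes the split as an architectural improvement, while the Office maps each step to mathematical operations and generic computer implementation.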
Argument 2: The applicant argues that the rejections under 35 U.S.C. 102 and 103 are improper because the cited prior art fails to disclose or suggest the amended limitations of the independent claims, particularly the use of a hyper network that is trained to learn an external parameter space and to generate main network parameters based on inputting external parameters from that learned space. The applicant contends that Li discloses only a single neural network, namely a Fourier neural operator used to obtain solutions to parametric partial differential equations, and does not teach or suggest a second neural network that generates parameters for a main network based on external parameters, nor learning or using an external parameter space as now claimed. The applicant further asserts that, even assuming arguendo that Li's Fourier neural operator could be viewed as a hyper network, Li still fails to disclose generating main network parameters for a separate main network based on input external parameters, or learning external parameters such as physical system coefficients as a distinct parameter space. The applicant also argues that Li's disclosure of infinite-dimensional function spaces or spatial domains does not correspond to the claimed external parameter space, and therefore does not meet the amended claim limitations. Additionally, the applicant maintains that the secondary references, Balduzzi and Xie, do not cure these deficiencies, as Balduzzi relates to Taylor-based loss approximations rather than parameter generation for adapting a network to new physical systems, and Xie relates to neural fields used for representing visual information rather than neural operators or hyper networks for physical system simulation. According to the applicant, neither Balduzzi nor Xie, alone or in combination with Li, discloses or suggests the claimed features, including generating main network parameters from a learned external parameter space or the specific adaptations recited in the dependent claims. The applicant further argues that the newly added claims 16-20 recite additional features that further distinguish over the cited art and are therefore allowable for at least the same reasons as claim 1. Accordingly, the applicant requests withdrawal of the § 102 and § 103 rejections.

Examiner Response to Argument 2: The examiner has considered the argument set forth above; however, the argument is not persuasive because, as explained in the claim mappings previously set forth, all of the pending claims are rejected under 35 U.S.C. 103, rather than under 35 U.S.C. 102, and therefore are evaluated based on whether the cited references, alone or in combination, render the claimed subject matter obvious as a whole. In particular, the amendments introduced into the independent claims, including the limitations directed to learning an external parameter space and generating main network parameters by inputting external parameters from that learned space, necessitated reliance on additional references, such as Ha, to address those features, and the claims are therefore properly treated under an obviousness framework rather than anticipation. As mapped, Li teaches a learned operator framework in which system-dependent inputs are used to determine operator parameters, including defining R_φ as a parametric function that maps system-related coefficients to Fourier-mode weights, while Ha expressly teaches learning an external parameter space and using learned embeddings or latent variables as inputs to a hypernetwork that generates the weights of another network. Thus, the applicant's assertion that the prior art fails to disclose or suggest the amended limitations is inconsistent with the mapped teachings of Li in view of Ha, which together address the amended claim language. The applicant's argument that Li discloses only a single neural network and does not teach a hypernetwork generating parameters for a main network overlooks the obviousness analysis applied here, which does not require a single reference to disclose all elements, but permits combining teachings where there is a reasonable expectation of success. Additionally, the mappings show that the secondary references were applied to specific dependent claim elements, such as Balduzzi for Taylor-based updates and Xie for conditioning on latent variables, training with stochastic gradient descent, and generating parameters for downstream networks, further supporting the obviousness rejections.
The newly added claims 16-20 likewise do not overcome the rejections, as the mappings demonstrate that Li teaches the recited Fourier-layer structures, spatial and frequency components, one-dimensional operation, FFT-based processing, and stacked operator layers, while Ha provides the hypernetwork-based parameter generation inherited from the amended independent claims. Accordingly, when the claims are considered as a whole under the proper § 103 standard and in view of the mapped disclosures and articulated motivations to combine, the applicant's arguments do not rebut the prima facie case of obviousness, and the rejections under 35 U.S.C. 103 are maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, Step 1: Claim 1 is directed to a method, which is a statutory process. Step 1 is satisfied.

Step 2A Prong 1: "training a hyper network configured to generate main network parameters for a main network, wherein training the hyper network comprises learning an external parameter space" -- The limitation is directed to learning mathematical relationships and forming a mathematical representation of an external parameter space. This limitation is directed to a mathematical concept, and thus is considered math.

Step 2A Prong 2 and Step 2B: "A computer-implemented method for operating a hyper network machine learning system, the method comprising:… and wherein generating the main network parameters is based on inputting external parameters from the learned external parameter space into the trained hyper network;" -- The limitation recites generating the main network parameters by inputting the external parameters from the learned parameter space into the trained hyper network. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, generating parameters based on parameters gathered (input/output) from the parameter space and supplied to the trained network is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)).

"generating, using the trained hyper network, the main network with the main network parameters, the main network having a machine learning architecture that models a spatial domain and a frequency domain to simulate a physical system." -- The limitation recites generating the main network with the main network parameters using the trained hyper network, where the main network has a machine learning architecture that models the spatial/frequency domains to simulate a "physical system". The limitation amounts to no more than mere instructions to apply the exception on a computer; it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 1 is not patent eligible.

Claim 15 is analogous to claim 1, aside from claim type and the following limitation.
"…comprising memory and one or more hardware processors which, alone or in combination, are configured to provide for execution a method comprising:" -- This amounts to mere instructions to apply and would be rejected as such (see MPEP 2106.05(f)); thus, the same rejection above applies to claim 15 as well.

Regarding claim 2, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: "The method of claim 1, wherein the main network has a Fourier neural operator architecture comprising a plurality of Fourier layers each having a frequency and spatial component, and wherein the hyper network generating the main network parameters comprises generating parameters for the Fourier layers." -- The limitation is directed to Fourier layers having frequency/spatial components, and to the hyper network generating parameters for those layers. The limitation is directed to mathematical calculation/use of mathematical concepts, and thus the limitation is directed to math. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 2 is not patent eligible.

Regarding claim 3, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the abstract ideas of: "The method of claim 2, wherein during training of the hyper network, the hyper network modifies the Fourier layers based on a Taylor expansion around a learned configuration to determine updated parameters for the Fourier layers," -- The limitation recites that the Fourier layers are modified during training using a Taylor expansion, a known mathematical concept, and thus the limitation is directed to math.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "wherein the updated parameters are changed in both the frequency and spatial component." -- The limitation is directed to updating parameters in both components. The limitation is directed to mere manipulation of data, which is an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of updating parameters is considered electronic recordkeeping, which is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 3 is not patent eligible.
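Claim 3's Taylor-expansion language admits a concrete reading: treat the Fourier-layer parameters as a function θ(e) of the external parameters and expand to first order around the learned configuration e0, so that updated parameters follow from θ(e0), the Jacobian at e0, and the shift in e. The sketch below illustrates that reading under stated assumptions; the toy layer_params map stands in for the hyper network and is not the applicant's disclosed update rule.

```python
import torch
from torch.autograd.functional import jacobian

def layer_params(e: torch.Tensor) -> torch.Tensor:
    # Stand-in for the hyper network's map from external parameters to a
    # flattened vector holding one Fourier layer's spatial + frequency weights.
    W = torch.tensor([[0.5, -1.0, 0.2],
                      [1.5,  0.3, -0.7]])
    return torch.tanh(W @ e)

e0 = torch.tensor([1.0, 0.0, 2.0])   # learned configuration
theta0 = layer_params(e0)            # parameters at the learned configuration
J = jacobian(layer_params, e0)       # sensitivity of each parameter to e

def taylor_updated_params(e_new: torch.Tensor) -> torch.Tensor:
    # theta(e) ~ theta(e0) + J (e - e0): a first-order expansion around e0.
    # Every component of theta changes, i.e., both the frequency and the
    # spatial parts of the parameter vector are updated, as claim 3 recites.
    return theta0 + J @ (e_new - e0)

print(taylor_updated_params(torch.tensor([1.1, -0.2, 2.0])))
```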
Regarding claim 4, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 1, the method further comprising obtaining a dataset based on experimental or simulation data generated with different parameter configurations, the dataset comprising a plurality of inputs and a plurality of outputs corresponding to the inputs" -- The limitation recites obtaining a dataset based on gathered data that was generated with different parameter configurations, where the dataset comprises a plurality of inputs and outputs. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of obtaining gathered data and transmitting (sending/receiving) data over a network is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). "wherein the hyper network is trained using the dataset" -- The limitation recites training a hyper network using a dataset. The limitation is directed to mere instructions to apply; thus, it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 4 is not patent eligible.

Regarding claim 5, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the abstract ideas of: "to determine a simulation result based on the at least one input of the dataset;" -- The limitation is directed to determining a result based on at least one input from a dataset. The limitation is directed to a process that can be completed in the human mind using evaluation, observation, and judgment, and thus the limitation is directed to a mental process. "comparing the simulation result against at least one output corresponding to the at least one input from the dataset;" -- The limitation is directed to comparing a simulation result against at least one output/input from a dataset. The limitation is directed to a process that can be completed in the human mind using evaluation, observation, and judgment, and thus the limitation is directed to a mental process.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 4, wherein the training comprises: simulating, via the main network generated with the main network parameters, the physical system" -- The limitation recites simulating a physical system via a network that was generated with the main network parameters. Using a main network to simulate a physical system is considered mere instructions to apply on a computer; thus, it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)).
"and updating the main network parameters based on the comparison result." -- The limitation is directed to updating network parameters based on gathered data (the comparison result). The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of updating parameters based on gathered data is considered electronic recordkeeping, which is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 5 is not patent eligible.

Regarding claim 6, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 5, wherein the training of the hyper network is iteratively conducted until the simulation result is within a predetermined tolerance threshold when compared to the at least one output." -- The limitation recites that training of the hyper network is conducted iteratively until a predetermined threshold is reached. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of performing repetitive calculations is a well-understood, routine, and conventional activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 6 is not patent eligible.

Regarding claim 7, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "receiving the external parameters by the hyper network, wherein the external parameters are system parameters that correspond to the physical system targeted for simulation." -- The limitation recites receiving external parameters by the hyper network, where the parameters correspond to the physical system to be simulated. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of receiving parameters over a network, where they correspond to other data in the network/system, is a well-understood, routine, and conventional activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)).
Thus, claim 7 is not patent eligible.

Regarding claim 8, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: "The method of claim 1, wherein the hyper network comprises Fourier layers each having a frequency and spatial component with corresponding hyper network parameters" -- The limitation is directed to layers having frequency/spatial components with corresponding parameters. The limitation is directed to mathematical concepts/calculations, and thus the limitation is directed to math.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "and wherein the method further comprises receiving the external parameters by the hyper network, wherein the external parameters are system parameters configured to adapt the Fourier layers to the physical system targeted for simulation." -- The limitation recites receiving the external parameters by the network to adapt the layers. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of obtaining gathered data and transmitting (sending/receiving) data over a network is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 8 is not patent eligible.

Regarding claim 9, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the abstract ideas of: "wherein the hyper network comprises Fourier layers each having a frequency and spatial component with corresponding hyper network parameters," -- The limitation is directed to layers having frequency/spatial components with corresponding parameters. The limitation is directed to mathematical concepts/calculations, and thus the limitation is directed to math. "wherein the external parameters are determined by learning a representation of the external parameters according to a bilevel problem." -- The limitation is directed to determining external parameters by learning a representation of parameters from a bilevel problem. The limitation is directed to a process that can be performed in the human mind using evaluation, observation, and judgment (with aid of pen and paper), and thus the limitation is directed to a mental process.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception.
The additional elements: "and wherein the method further comprises adapt the Fourier layers to the physical system targeted for simulation based on external parameters" -- The limitation recites that the hyper network further comprises adapting layers to the system based on the external parameters. The limitation amounts to no more than merely limiting the exception to a particular field of use or technological environment; thus, it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 9 is not patent eligible.

Regarding claim 10, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the abstract ideas of: "The method of claim 1, wherein the hyper network comprises hyper network parameters corresponding to the spatial domain and the frequency domain" -- The limitation is directed to parameters corresponding to the spatial and frequency domains. The limitation is directed to mathematical concepts/calculations, and thus the limitation is directed to math. "wherein training the hyper network comprises updating the hyper network parameters using stochastic gradient descent based on a training database comprises input and output pairs until a target loss threshold is reached" -- The limitation is directed to updating parameters using stochastic gradient descent based on a training database comprising input/output pairs until a threshold is reached. The limitation is directed to the use of a known mathematical concept, stochastic gradient descent, to perform the task, and thus it is directed to math.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "wherein the generating of the main network is performed after completing the training of the hyper network and comprises receiving external parameters associated with the target physical system; and generating the main network parameters based on the hyper network parameters and the external parameters." -- The limitation recites generating a main network after training, comprising receiving data (the external parameters) associated with the physical system and generating the main network parameters from the hyper network parameters and those external parameters. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of sending/receiving data over a network is a well-understood, routine, and conventional activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 10 is not patent eligible.
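The training regime recited in claim 10 (stochastic gradient descent over a database of input/output pairs until a target loss threshold is reached) corresponds to a standard loop of the following shape. This is a generic, hedged sketch: the dataset, sizes, learning rate, and the 1e-3 threshold are placeholders, and the epoch cap simply guards against non-convergence on toy data.

```python
import torch
import torch.nn as nn

# Toy training database of input/output pairs: external parameters in,
# reference targets out.
data = [(torch.randn(3), torch.randn(160)) for _ in range(64)]

hyper = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 160))
opt = torch.optim.SGD(hyper.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
TARGET_LOSS, MAX_EPOCHS = 1e-3, 500

for epoch in range(MAX_EPOCHS):
    epoch_loss = 0.0
    for ext, ref in data:                # stochastic updates, one pair at a time
        opt.zero_grad()
        loss = loss_fn(hyper(ext), ref)
        loss.backward()
        opt.step()
        epoch_loss += loss.item()
    if epoch_loss / len(data) < TARGET_LOSS:
        break                            # "until a target loss threshold is reached"
```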
Regarding claim 11, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 1, comprising instantiating the main network on a computer system and operating the main network to simulate the target physical system." -- The limitation recites instantiating the main network on a computer system and operating it to simulate a physical system. The limitation amounts to no more than mere instructions to apply the exception on a computer; it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 11 is not patent eligible.

Regarding claim 12, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the abstract ideas of: "determining whether to activate an alarm or hardware control sequence based on the simulation result." -- The limitation is directed to determining whether to activate an alarm or a control sequence based on a result. The limitation is directed to a process that can be performed in the human mind using evaluation, observation, and judgment (with aid of pen and paper), and thus the limitation is directed to a mental process.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "receiving input data, simulating the physical system based on the input data to provide a simulation result;" -- The limitation is directed to receiving input data and simulating a system based on the data to provide a result. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of sending/receiving data over a network is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 12 is not patent eligible.

Regarding claim 13, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the abstract ideas of: "The method of claim 1, comprising parameterizing a meta-learning network by modifying only system parameters" -- The limitation is directed to parameterizing a network by modifying only the system parameters. The limitation is directed to a process that can be performed in the human mind using evaluation, observation, and judgment (with aid of pen and paper), and thus the limitation is directed to a mental process.
Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "wherein the main network based on the main network parameters generated by the hyper network includes fewer parameters than the hyper network." -- The limitation recites that the main network based on parameters generated by another network will include fewer parameters. The limitation amounts to no more than merely further limiting the exception to a field of use/environment; thus, it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 13 is not patent eligible.

Regarding claim 14, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "A tangible, non-transitory computer-readable medium having instructions thereon which, upon being executed by one or more hardware processors, alone or in combination, provide for execution of the method of claim 1." -- The limitation recites that instructions on a tangible, non-transitory computer-readable medium, when executed by one or more processors, carry out the method of claim 1. The limitation amounts to no more than mere instructions to apply the exception on a computer; it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 14 is not patent eligible.

Regarding claim 16, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 1, wherein the main network comprises a first Fourier layer, a second Fourier layer, a first parameter layer, and a second parameter layer,… and wherein the first Fourier layer and the second Fourier layer are positioned between the first parameter layer and the second parameter layer." -- The limitation recites that the main network comprises first and second Fourier layers and first and second parameter layers, with the Fourier layers positioned between the parameter layers.
The limitation amounts to no more than merely further limiting the exception to a field of use/environment; thus, it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(h)). "wherein the first parameter layer processes an input signal of the main network and the second parameter layer generates an output signal of the main network," -- The limitation recites that the first parameter layer processes an input signal of the main network and the second parameter layer generates an output signal of the main network. The limitation is directed to an insignificant, extra-solution activity that does not integrate the exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of inputting signals into a network and generating output signals is a well-understood, routine, and conventional activity (WURC) that does not provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 16 is not patent eligible.

Regarding claim 17, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 16, wherein the first parameter layer processes the input of the main network by transforming the input signal into the external parameter space based on adding and/or reducing features from the input signal," -- The limitation recites transforming an input signal into another representational space. Such transformations constitute data manipulation and preprocessing, which are insignificant, extra-solution activities that do not integrate the exception into a practical application (see MPEP 2106.05(g)). Under Step 2B, adding or reducing features to form an embedding or transformed representation is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). "wherein the first Fourier layer processes the transformed input signal that is in the external parameter space." -- The limitation is directed to the first Fourier layer processing the transformed input signal once it is in the external parameter space. The limitation amounts to no more than mere instructions to apply the exception on a computer; thus, it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 17 is not patent eligible.

Regarding claim 18, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception?
No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 17, wherein the first Fourier layer comprises a spatial layer and a frequency layer, wherein the spatial layer processes the transformed input signal to generate a first output that is in the spatial domain and the frequency layer processes the transformed input signal to generate a second output that is in the frequency domain," -- The limitation recites that the first Fourier layer comprises a spatial layer and a frequency layer, that the spatial layer processes the input signal to generate a first output in the spatial domain, and that the frequency layer does the same for the second output in the frequency domain. The limitation is directed to insignificant, extra-solution activities and does not integrate the exception into a practical application (see MPEP 2106.05(g)). Under Step 2B, processing transformed input signals to generate outputs is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). "and wherein generating the main network parameters for the main network comprises processing the external parameters using the trained hyper network to generate a first tensor that parameterizes the spatial layer and a second tensor that parameterizes the frequency layer." -- The limitation recites a similar limitation as previous claims, where generating the parameters for the main network comprises processing parameters using the trained network, and it is seen as mere instructions to apply (see MPEP 2106.05(f)). Thus, claim 18 is not patent eligible.

Regarding claim 19, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1.

Step 2A Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites the abstract ideas of: "wherein the frequency layer operates in a Fourier space and uses a Fast Fourier Transform to generate the second output." -- The limitation is directed to using the Fast Fourier Transform, a well-known mathematical algorithm, to generate another output. The limitation is directed to the use of a mathematical concept, and thus it is directed to math.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 18, wherein the spatial layer comprises a one-dimensional (1-D) convolutional network." -- The limitation recites that the spatial layer comprises a one-dimensional (1-D) convolutional network. The limitation amounts to no more than mere instructions to apply the exception on a computer; it does not integrate the exception into a practical application, nor does it provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 19 is not patent eligible.
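Claims 18 and 19 describe the inner structure of a Fourier layer: a spatial branch (here a 1-D convolution) and a frequency branch that applies an FFT, weights a truncated set of Fourier modes, and inverts the transform; claim 20 adds a combiner that merges the two branch outputs before the next layer. A minimal sketch consistent with that language and with the Li mapping follows; channel counts, mode counts, the initialization scale, and the ReLU combiner are assumptions, not recited details.

```python
import torch
import torch.nn as nn

class FourierLayer1D(nn.Module):
    """One Fourier layer: spatial branch + frequency branch + combiner."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.spatial = nn.Conv1d(channels, channels, kernel_size=1)  # spatial layer (1-D conv)
        # Complex weights on the retained low Fourier modes; in the claimed
        # design this tensor would be generated by the hyper network.
        self.freq_w = nn.Parameter(
            0.02 * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )
        self.modes = modes

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, channels, grid)
        first = self.spatial(x)                            # first output (spatial domain)
        xf = torch.fft.rfft(x)                             # FFT into Fourier space
        out_f = torch.zeros_like(xf)
        out_f[..., : self.modes] = torch.einsum(           # weight the low modes
            "bim,iom->bom", xf[..., : self.modes], self.freq_w
        )
        second = torch.fft.irfft(out_f, n=x.size(-1))      # second output (back on the grid)
        return torch.relu(first + second)                  # layer combiner

layer = FourierLayer1D(channels=8, modes=12)
out = layer(torch.randn(4, 8, 64))                         # feeds the next Fourier layer
```

In claim 16's arrangement, two such layers would sit between a first parameter layer that lifts the input signal and a second parameter layer that projects to the output signal.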
Regarding claim 20, Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is directed to a method, which is considered to be a process. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1.

Step 2A Prong 2 and Step 2B: Does the claim recite additional elements that integrate the judicial exception into a practical application and/or provide significantly more than the judicial exception? No, the claim does not recite additional elements that integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception. The additional elements: "The method of claim 18, wherein the first Fourier layer further comprises a layer combiner that generates and provides a first Fourier layer output to the second Fourier layer, wherein the first Fourier layer output is based on the first output and the second output." -- The limitation recites that the first Fourier layer further comprises a "layer combiner" that generates and provides a layer output to the second Fourier layer, where the output of the first Fourier layer is based on the first and second outputs. The limitation is directed to an insignificant, extra-solution activity that cannot be integrated into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, the act of generating and providing an output from one layer to another, where said output is based on past/gathered data, is a well-understood, routine, and conventional activity (WURC) that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 20 is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 4, 5, 6, 7, 8, 9, 11, 14, 15, 16, 17, 18, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over NPL reference "FOURIER NEURAL OPERATOR FOR PARAMETRIC PARTIAL DIFFERENTIAL EQUATIONS" by Li et al. (referred to herein as Li) in view of NPL reference "HYPERNETWORKS" by Ha et al. (referred to herein as Ha).
Regarding claim 1, Li teaches: A computer-implemented method for operating a hyper network machine learning system, the method comprising: training a hyper network configured to generate main network parameters for a main network ([Li, page 3] "We introduce the Fourier neural operator, a novel deep learning architecture able to learn mappings between infinite-dimensional spaces of functions; the integral operator is restricted to a convolution, and instantiated through a linear transformation in the Fourier domain.", wherein the examiner interprets "learn mappings between infinite-dimensional spaces of functions" to be the same as training a hyper network configured to generate main network parameters for a main network, because both are directed to a learned system that maps from input function spaces to outputs that define another function (analogous to the main network behavior), effectively generating the configuration (i.e., parameters) of a downstream computational model.)

generating, using the trained hyper network, the main network with the main network parameters ([Li, page 2] "Obtaining a solution for a new instance of the parameter requires only a forward pass of the network, alleviating the major computational issues incurred in Neural-FEM methods", wherein the examiner interprets "a forward pass of the network" and "shared learned parameters" to be the same as generating, using the trained hyper network, the main network with the main network parameters, because both are directed to using a previously trained network to produce outputs (i.e., instantiate a network function) specific to a new configuration, based on stored or learned parameters.)

the main network having a machine learning architecture that models a spatial domain and a frequency domain ([Li, page 8] "FNO-2D, U-Net, TF-Net, and ResNet all do 2D-convolution in the spatial domain...", [Li, page 4] "apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W.", [Li, page 5] "We propose replacing the kernel integral operator in (3), by a convolution operator defined in Fourier space…We therefore propose to directly parameterize κ_φ in Fourier space", and [Li, page 3] "We introduce the Fourier neural operator, a novel deep learning architecture able to learn mappings between infinite-dimensional spaces of functions; the integral operator is restricted to a convolution, and instantiated through a linear transformation in the Fourier domain.", wherein the examiner interprets "2D-convolution in the spatial domain" and "Fourier transform F; a linear transform R on the lower Fourier modes...inverse Fourier transform F⁻¹" to be the same as a machine learning architecture that models a spatial domain and a frequency domain, because both are directed to neural network layers that operate on spatially structured data using convolution, and on frequency components using learned spectral filters in the Fourier domain. The examiner further interprets performing a learned transformation on selected frequency components, and reconstructing the signal via an inverse transform, all within the network layer structure, to be the same as a machine learning architecture that models a frequency domain, because both are directed to modifying and learning in the spectral (frequency) space as a core part of the network function, as an embedded computational layer.)
to simulate a physical system ([Li, page 6] "we compare the proposed Fourier neural operator with multiple finite-dimensional architectures as well as operator-based approximation methods on the 1-d Burgers' equation, the 2-d Darcy Flow problem, and 2-d Navier-Stokes equation", wherein the examiner interprets "Fourier neural operator with ... Burgers' equation, Darcy Flow ... and Navier-Stokes equation" to be the same as simulating a physical system, because both are directed to generating predictions of real-world physical phenomena modeled by partial differential equations, such as fluid dynamics and subsurface flow, using machine-learned architectures.)

Li does not teach: wherein training the hyper network comprises learning an external parameter space; and wherein generating the main network parameters is based on inputting external parameters from the learned external parameter space into the trained hyper network.

Ha teaches: wherein training the hyper network comprises learning an external parameter space ([Ha, page 1] "where the input is an embedding vector that describes the entire weights of a given layer. Our embedding vectors can be fixed parameters that are also learned during end-to-end training, allowing approximate weight-sharing within a layer and across layers of the main network", and [Ha, page 3] "During inference, the model simply takes the layer embeddings z_j learned during training to reproduce the kernel weights for layer j in the main convolutional network.", wherein the examiner interprets "takes the layer embeddings z_j learned during training to reproduce the kernel weights for layer j in the main convolutional network" to be the same as training a [hyper] network comprising learning an external parameter space, because both are directed to training a network by learning an external parameter space (layer embeddings) that allows for external weight sharing across layers.)

and wherein generating the main network parameters is based on inputting external parameters from the learned external parameter space into the trained hyper network ([Ha, page 4] "At every time step t, a HyperRNN takes as input the concatenated vector of input x_t and the hidden states of the main RNN h_{t-1}; it then generates as output the vector ĥ_t. This vector is then used to generate the weights for the main RNN at the same timestep. Both the HyperRNN and the main RNN are trained jointly with backpropagation and gradient descent" and [Ha, page 6, equation reproduced as image in the original record], wherein the examiner interprets the input-conditioned latent variable z to be another form of external parameters, and W(z) to be the generated main network parameters, because both describe weight generation based on external conditioning input.)

Li, Ha, and the instant application are analogous art because they are all directed to a method for generating parameters of a downstream neural network architecture using a parameter-generating model.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of generating network parameters and simulating a physical system disclosed by Li to include the "HyperNetworks are trained to generate the weights of another network… by mapping an input z into the parameter space of a target network" disclosed by Ha. One would be motivated to do so to effectively improve the adaptability of Li's parameter-generation process by enabling conditioning on external variables, as suggested by Ha ([Ha, page 3] "The HyperNetwork learns a continuous representation…enabling smooth variation in the generated weights.").

Claims 14 and 15 are analogous to claim 1: claim 14 merely applies the method of claim 1 by executing it on hardware processors, and claim 15 recites limitations analogous to those of claim 1. Therefore, both claims face the same rejection.

Regarding claim 2, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li further teaches wherein the main network has a Fourier neural operator architecture comprising a plurality of Fourier layers each having a frequency and spatial component ([Li, page 6] "We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (2) and (4) with the ReLU activation as well as batch normalization." and [Li, page 4] "apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W.", wherein the examiner interprets "stacking four Fourier integral operator layers" and "Fourier transform F ... inverse Fourier transform F⁻¹ ... local linear transform W" to be the same as a Fourier neural operator architecture comprising a plurality of Fourier layers each having a frequency and spatial component, because both are directed to a neural architecture composed of multiple layers that process input signals in the frequency domain using the Fourier transform and in the spatial domain using local linear operations.)

wherein the hyper network generating the main network parameters comprises generating parameters for the Fourier layers ([Li, page 6] "In general, R can be defined to depend on (Fa) to parallel (3). Indeed, we can define R_φ : Z^d × R^(d_v) → R^(d_v×d_v) as a parametric function that maps (k, (Fa)(k)) to the values of the appropriate Fourier modes." and [Li, page 9] "The neural operator has an iterative structure that can naturally be formulated as a recurrent network where all layers share the same parameters without sacrificing performance.", wherein the examiner interprets "R can be defined to depend on (Fa)" and "a parametric function that maps (k, (Fa)(k)) to the values of the appropriate Fourier modes" to be the same as generating parameters for the Fourier layers, because both are directed to computing layer-specific spectral weights as a function of system-related inputs (via Fourier coefficients) and assigning those weights to the Fourier modes in each layer. The examiner further interprets "all layers share the same parameters" to reinforce that a single trained module (analogous to a hypernetwork) can provide parameters across the layers, which aligns with the concept of a hyper network generating the main network parameters.)
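On the mapped combination (Li's parametric R_φ in view of Ha's hypernetworks), the dependent-claim language about "a first tensor that parameterizes the spatial layer and a second tensor that parameterizes the frequency layer" (claim 18) could look like the following: a small network maps external parameters to a real spatial-transform tensor W and a complex Fourier-mode tensor R. This is a hedged sketch of the examiner's reading, not a disclosed embodiment; the architecture and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SpectralHyperNet(nn.Module):
    """Maps external parameters to (W, R): a real tensor for the spatial
    (pointwise) transform and a complex tensor for the Fourier-mode weights."""
    def __init__(self, ext_dim: int, ch: int, modes: int):
        super().__init__()
        self.ch, self.modes = ch, modes
        n_spatial = ch * ch                   # W: (ch, ch)
        n_freq = 2 * ch * ch * modes          # R: complex (ch, ch, modes), re + im
        self.body = nn.Sequential(
            nn.Linear(ext_dim, 128), nn.ReLU(),
            nn.Linear(128, n_spatial + n_freq),
        )

    def forward(self, ext: torch.Tensor):
        flat = self.body(ext)
        ch, m = self.ch, self.modes
        W = flat[: ch * ch].view(ch, ch)      # first tensor: parameterizes the spatial layer
        rest = flat[ch * ch :].view(2, ch, ch, m)
        R = torch.complex(rest[0], rest[1])   # second tensor: parameterizes the frequency layer
        return W, R

hn = SpectralHyperNet(ext_dim=3, ch=8, modes=12)
W, R = hn(torch.tensor([0.01, 1.0, 0.5]))     # external parameters in, layer tensors out
# R matches the shape of the freq_w tensor in the Fourier-layer sketch above.
```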
Li further teaches the method further comprising obtaining a dataset based on experimental or simulation data generated with different parameter configurations ([Li, page 3] “Suppose we have observations {a_j, u_j}_{j=1}^N where a_j ∼ µ is an i.i.d. sequence from the probability measure µ supported on A and u_j = G†(a_j) is possibly corrupted with noise. We aim to build an approximation of G† by constructing a parametric map”, wherein the examiner interprets “observations {a_j, u_j} ... where a_j is ... from a probability measure µ ... and u_j = G†(a_j)” to be the same as generating a dataset by running simulations with varying parameters, because both describe the generation of data pairs by evaluating a parameterized model (e.g., a PDE solver) under different input conditions), the dataset comprising a plurality of inputs and a plurality of outputs corresponding to the inputs, wherein the hyper network is trained using the dataset ([Li, page 4] “Let D_j = {x_1, . . . , x_n} ⊂ D be an n-point discretization of the domain D and assume we have observations a_j|_{D_j} ∈ R^{n×d_a}, u_j|_{D_j} ∈ R^{n×d_v}, for a finite collection of input-output pairs indexed by j.” AND [Li, pages 3, 6] “We aim to build an approximation of G† by constructing a parametric map … We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (2) and (4) with the ReLU activation as well as batch normalization. Unless otherwise specified, we use N = 1000 training instances and 200 testing instances. We use Adam optimizer to train for 500 epochs.”, wherein the examiner interprets “for a finite collection of input-output pairs” to be the same as “a plurality of inputs and a plurality of outputs corresponding to the inputs” because both refer to paired samples used to train the network. The examiner further interprets “We aim to build an approximation of G† by constructing a parametric map … We construct our Fourier neural operator by stacking … use Adam optimizer” to be the same as training a hypernetwork using the dataset, because both describe using the dataset to train a neural operator model that generalizes across parameterized inputs.)

Regarding claim 5, Li and Ha teach the method of claim 4 (see rejection of claim 4). Li further teaches wherein the training comprises: simulating, via the main network generated with the main network parameters, the physical system to determine a simulation result based on the at least one input of the dataset; comparing the simulation result against at least one output corresponding to the at least one input from the dataset; and updating the main network parameters based on the comparison result ([Li, page 3] “We aim to build an approximation of G† by constructing a parametric map G : A×Θ → U or equivalently, G_θ : A → U, θ ∈ Θ … so that G(·, θ†) ≈ G† … we seek a minimizer of the problem min_{θ∈Θ} E_{a∼µ}[C(G(a, θ), G†(a))]” AND [Li, page 5] “Then we apply several iterations of updates v_t ↦ v_{t+1} (defined below). The output u(x) = Q(v_T(x)) is the projection of v_T by the local transformation Q : R^{d_v} → R^{d_u}. In each iteration, the update v_t ↦ v_{t+1} is defined as the composition of a non-local integral operator K and a local, nonlinear activation function σ”, wherein the examiner interprets “constructing a parametric map G_θ to approximate G†, and minimizing the cost between G(a, θ) and G†(a)” to be the same as simulating the physical system using the main network (G_θ) and comparing the simulation result to ground-truth outputs (G†(a)), and interprets the “iterations of updates v_t ↦ v_{t+1} … defined as the composition of a non-local integral operator K and a local, nonlinear activation function σ” to be the same as updating the main network parameters based on the comparison result, because both describe a training process where a neural network is evaluated on a dataset, compared to expected outputs, and refined via optimization.)

Regarding claim 6, Li and Ha teach the method of claim 5 (see rejection of claim 5). Li further teaches wherein the training of the hyper network is iteratively conducted until the simulation result is within a predetermined tolerance threshold when compared to the at least one output ([Li, page 3] “The proposed method consistently outperforms all existing deep learning methods even when fixing the resolution to be 64×64. It achieves error rates that are 30% lower on Burgers’ Equation, 60% lower on Darcy Flow, and 30% lower on Navier Stokes (turbulent regime with viscosity ν = 1e−4). When learning the mapping for the entire time series, the method achieves < 1% error with viscosity ν = 1e−3 and 8% error with viscosity ν = 1e−4. … We will approach this problem in the test-train setting by using a data-driven empirical approximation to the cost used to determine θ and to test the accuracy of the approximation.”, wherein the examiner interprets “achieves < 1% error with viscosity ν = 1e−3 and 8% error with viscosity ν = 1e−4” and “test the accuracy of the approximation” to be the same as the simulation result being within a predetermined tolerance threshold when compared to the at least one output, because both are directed to comparing the output of the trained model to reference data and continuing training until the prediction error falls within an acceptable range of accuracy.)

Regarding claim 7, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li further teaches the method further comprising receiving external parameters by the hyper network, wherein the external parameters are system parameters that correspond to the physical system targeted for simulation, and wherein generating the main network with the main network parameters comprises the hyper network generating the main network parameters based on hyper network parameters and the external parameters ([Li, page 6] “In general, R can be defined to depend on (Fa) to parallel (3).
Indeed, we can define Rφ : Z^d × R^{d_v} → R^{d_v×d_v} as a parametric function that maps (k, (Fa)(k)) to the values of the appropriate Fourier modes.”, wherein the examiner interprets Li’s disclosure that the neural map Rφ combines its learned weights φ with the task-specific coefficients (Fa)(k), which correspond to descriptors of the target physical system, to output the tensors used in each Fourier layer, to be the same as the hyper network generating the main network parameters from both the hyper network parameters and the received external parameters, because both describe producing weights of a downstream simulation network by conditioning on learned network parameters together with parameters representing the physical system being simulated.)

Regarding claim 8, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li further teaches wherein the hyper network comprises Fourier layers each having a frequency and spatial component with corresponding hyper network parameters, and wherein the method further comprises receiving external parameters by the hyper network, the external parameters being configured to adapt the Fourier layers to the physical system targeted for simulation ([Li, page 4] “apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W.” AND [Li, page 6] “R can be defined to depend on (Fa). Indeed, we can define Rφ … as a parametric function that maps (k, (Fa)(k)) to the values of the appropriate Fourier modes.”, wherein the examiner interprets the Fourier transform and inverse Fourier transform together with R and W to be the same as Fourier layers each having frequency and spatial components with hyper network parameters, and interprets Rφ depending on (Fa)(k) to be the same as receiving external parameters configured to adapt the Fourier layers to the physical system, because both describe a structure where Fourier-domain and spatial-domain transforms are parameterized and modulated using data describing the target physical system.)

Regarding claim 9, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li further teaches wherein the hyper network comprises Fourier layers each having a frequency and spatial component with corresponding hyper network parameters ([Li, page 4] “On top: apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹”, wherein the examiner interprets “apply the Fourier transform F … linear transform R … inverse Fourier transform F⁻¹” together with the spatial-domain transform W to be the same as Fourier layers each having a frequency component and a spatial component with corresponding hyper network parameters, because both describe layers that operate across these two domains using distinct trainable transformations), and wherein the method further comprises adapting the Fourier layers to the physical system targeted for simulation based on the external parameters, wherein the external parameters are determined by learning a representation of the external parameters according to a bilevel problem ([Li, page 6] “Parameterizations of R. In general, R can be defined to depend on (Fa) to parallel (3). Indeed, we can define Rφ : Z^d × R^{d_v} → R^{d_v×d_v} as a parametric function that maps (k, (Fa)(k)) to the values of the appropriate Fourier modes.”, wherein the examiner interprets “R … defined to depend on (Fa) … maps (k, (Fa)(k)) to the values of the appropriate Fourier modes” to be the same as adapting the Fourier layers to the physical system targeted for simulation based on external parameters that are learned through a bilevel outer/inner optimization, because both are directed to using a learned representation of external parameters at an outer (upper) level to condition or adapt the Fourier-layer weights that are trained at an inner (lower) level, thereby implementing the cited bilevel approach.)

Regarding claim 11, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li further teaches the method further comprising instantiating the main network on a computer system and operating the main network to simulate the target physical system ([Li, page 2] “Obtaining a solution for a new instance of the parameter requires only a forward pass of the network, alleviating the major computational issues incurred in Neural-FEM methods”, wherein the examiner interprets “a forward pass of the network” to be the same as operating the main network, because both are directed to applying the trained network to generate a simulation output given an input, and “solution for a new instance of the parameter” to be the same as simulating the target physical system, because both describe using the network to model a new physical scenario of the system of interest, AND [Li, page 9] “Once trained, FNO can be used to quickly perform multiple MCMC runs for different initial conditions and observations”, wherein the examiner interprets “used to quickly perform multiple MCMC runs” to be the same as operating the main network to simulate the target physical system, because both refer to executing the trained network to simulate or estimate outcomes under new conditions on a computational system.)

Regarding claim 16, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li further teaches wherein the main network comprises a first Fourier layer and a second Fourier layer ([Li, page 5] “We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (2) and (4) with the ReLU activation as well as batch normalization.”, wherein the examiner interprets “stacking four Fourier integral operator layers” to be the same as the main network comprising a first Fourier layer and a second Fourier layer, because both are directed to a neural architecture in which multiple Fourier-based transformation layers are arranged sequentially to process the internal representation),
the method further comprises a first parameter layer that processes an input signal of the main network and a second parameter layer that generates an output signal of the main network ([Li, page 3] “the input a ∈ A is first lifted to a higher dimensional representation v₀(x) = P(a(x)) by the local transformation P which is usually parameterized by a shallow fully-connected neural network ... The output u(x) = Q(v_T(x)) is the projection of v_T by the local transformation Q : R^{d_v} → R^{d_u}.”, wherein the examiner interprets the local transformation P to be the same as the first parameter layer that processes an input signal, and the local transformation Q to be the same as the second parameter layer that generates an output signal, because both are directed to learned linear transformations that (1) lift or encode an input signal into an internal feature representation and (2) project or decode the final internal representation into the network’s output), and wherein the first Fourier layer and the second Fourier layer are positioned between the first parameter layer and the second parameter layer ([Li, page 3] “the input a ∈ A is first lifted…” AND [Li, page 6] “We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (2) and (4) with the ReLU activation as well as batch normalization.” AND [Li, page 3] “We introduce the Fourier neural operator, a novel deep learning architecture able to learn mappings between infinite-dimensional spaces of functions; the integral operator is restricted to a convolution, and instantiated through a linear transformation in the Fourier domain.”, wherein the examiner interprets “Fourier neural operator, a novel deep learning architecture able to learn mappings between infinite-dimensional spaces of functions” to be the same as positioning the first Fourier layer and the second Fourier layer between the first parameter layer and the second parameter layer, because both describe a neural architecture in which an initial parameterized transformation of the input is followed by a sequence of Fourier layers and subsequently a parameterized output transformation.) A sketch of this lift/operate/project arrangement follows below.

Li, Ha, and the instant application are analogous art because they are all directed to a method for generating and applying parameterized neural network components that transform an input signal through an initial learned layer, one or more intermediate operator layers, and a final learned output layer to produce a modeled result. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Li and Ha to include the stacking of multiple Fourier integral operator layers disclosed by Li (“We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (2) and (4) with the ReLU activation as well as batch normalization.”). One would be motivated to do so to incorporate multiple Fourier-based transformation layers within the architecture so as to improve the expressiveness and representational capacity of the main network, as suggested by Li ([Li, page 5] “We construct our Fourier neural operator by stacking four Fourier integral operator layers…”).

Regarding claim 17, Li and Ha teach the method of claim 16 (see rejection of claim 16).
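As an illustration of the lift/operate/project arrangement mapped for claim 16 above (a hedged sketch reusing the hypothetical FourierLayer1d class defined earlier; the dimensions are arbitrary), with the claim 17 analysis resuming after the sketch:

import torch
import torch.nn as nn

class FNO1d(nn.Module):
    # P lifts the input a(x) to a wider representation v0(x) ("first
    # parameter layer"); stacked Fourier layers update v_t -> v_{t+1};
    # Q projects v_T(x) to the output u(x) ("second parameter layer").
    def __init__(self, in_ch=2, width=64, out_ch=1, k_max=16, n_layers=4):
        super().__init__()
        self.P = nn.Conv1d(in_ch, width, kernel_size=1)
        self.fourier = nn.ModuleList(
            FourierLayer1d(width, k_max) for _ in range(n_layers))
        self.Q = nn.Conv1d(width, out_ch, kernel_size=1)

    def forward(self, a):                  # a: (batch, in_ch, n_grid)
        v = self.P(a)
        for layer in self.fourier:
            v = layer(v)
        return self.Q(v)                   # u: (batch, out_ch, n_grid)

In this arrangement the Fourier layers sit strictly between P and Q, which is how the examiner reads the claim 16 ordering onto Li's figure.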
Li further teaches wherein the first parameter layer processes an input signal of the main network by transforming the input signal into an external parameter space based on adding and/or reducing features from the input signal ([Li, page 3] “the input a ∈ A is first lifted to a higher dimensional representation v₀(x) = P(a(x)) by the local transformation P which is usually parameterized by a shallow fully-connected neural network”, wherein the examiner interprets “lifted to a higher dimensional representation” to be the same as transforming the input signal based on adding or reducing features, because both describe generating a revised feature basis that differs from the original input, thereby forming an external parameter space that serves as a learned representation for downstream network computation), and wherein the first Fourier layer processes the transformed input signal that is in the external parameter space ([Li, page 3] “The Fourier neural operator is the first work that learns ... By construction, the method shares the same learned network parameters irrespective of the discretization used on the input and output spaces” AND [Li, page 4] “start from input a. 1 … apply the Fourier transform F; a linear transform R on the lower Fourier modes … then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W ...”, wherein the examiner interprets “input and output spaces ... apply the Fourier transform F; a linear transform R on the lower Fourier modes … then apply the inverse Fourier transform F⁻¹” to be the same as the first Fourier layer processing the transformed input signal, because both describe a Fourier-based operator that receives the previously transformed representation and applies frequency-domain and spatial-domain operations to it, thereby operating on the signal that resides in the external parameter space established by the initial feature-transformation layer.)

Regarding claim 18, Li and Ha teach the method of claim 17 (see rejection of claim 17). Li further teaches wherein the first Fourier layer comprises a spatial layer and a frequency layer ([Li, page 4] “apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W.”, wherein the examiner interprets the sequence “Fourier transform F … linear transform R … inverse Fourier transform F⁻¹” to be the same as a frequency layer, and the “local linear transform W” to be the same as a spatial layer, because both describe separate computational branches within the Fourier layer that operate respectively in the frequency domain and in the spatial domain), wherein the spatial layer processes the transformed input signal to generate a first output that is in the spatial domain and the frequency layer processes the transformed input signal to generate a second output that is in the frequency domain ([Li, page 3] “the input a ∈ A is first lifted… Then we apply several iterations of updates v_t”, and [Li, page 4] “apply the Fourier transform F… then apply the inverse Fourier transform F⁻¹.
On the bottom: apply a local linear transform W.”, wherein the examiner interprets the inverse Fourier transform F⁻¹ applied after R(F(v)) to be the frequency-domain output converted back to contribute to the layer update, and the local linear transform W applied directly to v to be the spatial-domain output, because both operations receive the same transformed input v₀(x) and separately produce outputs in the frequency and spatial domains respectively.)

Li does not teach wherein generating the main network parameters for the main network comprises processing the external parameters using the trained hyper network to generate a first tensor that parameterizes the spatial layer and a second tensor that parameterizes the frequency layer. Ha teaches this limitation ([Ha, page 1] “the input is an embedding vector that describes the entire weights of a given layer” and [Ha, page 5] “Our approach borrows from the static hypernetwork section and we will use an intermediate hidden vector d(z) ∈ R^{N_h} to parametrize a weight matrix, where d(z) will be a linear projection of z.”, wherein the examiner interprets “embedding vector that describes the entire weights of a given layer” and “we will use an intermediate hidden vector d(z) ∈ R^{N_h} to parametrize a weight matrix” to be the same as processing external parameters with a trained hypernetwork to generate parameter tensors, because both describe a learned conditioning mechanism in which the hypernetwork outputs the weight tensors needed to parameterize separate computational components of a downstream network.) A sketch of a hypernetwork emitting one tensor for each branch follows below.

Li, Ha, and the instant application are analogous art because they are all directed to neural network architectures that generate parameterized representations of an input signal, apply operator-based layers such as Fourier layers to transformed feature representations, and utilize learned parameters to govern the behavior of spatial and frequency-domain computations within the network. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Li and Ha to include the “static hypernetwork section and we will use an intermediate hidden vector d(z) ∈ R^{N_h} to parametrize a weight matrix” disclosed by Ha. One would be motivated to do so to parametrize the processing of the external parameters with a trained hypernetwork that generates the required parameter tensors, as suggested by Ha ([Ha, page 5] “Our approach borrows from the static hypernetwork section and we will use an intermediate hidden vector d(z) ∈ R^{N_h} to parametrize a weight matrix, where d(z) will be a linear projection of z.”).

Regarding claim 19, Li and Ha teach the method of claim 18 (see rejection of claim 18).
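As an illustration of the claim 18 two-tensor mapping (a hedged sketch with hypothetical names, not Ha's actual d(z) construction; the claim 19 analysis resumes after the sketch), a trained hypernetwork can map external parameters, for example coefficients describing the target physical system, to one tensor for the spatial branch and one for the frequency branch:

import torch
import torch.nn as nn

class TwoTensorHyperNet(nn.Module):
    # Maps an external-parameter vector to (i) a spatial-branch weight
    # matrix W and (ii) complex spectral weights R for the frequency branch.
    def __init__(self, ext_dim, channels, k_max):
        super().__init__()
        self.channels, self.k_max = channels, k_max
        self.to_W = nn.Linear(ext_dim, channels * channels)
        self.to_R = nn.Linear(ext_dim, 2 * k_max * channels * channels)

    def forward(self, ext):
        c, k = self.channels, self.k_max
        W = self.to_W(ext).view(c, c)            # first tensor (spatial)
        R = self.to_R(ext).view(2, k, c, c)      # real and imaginary parts
        return W, torch.complex(R[0], R[1])      # second tensor (frequency)

hyper = TwoTensorHyperNet(ext_dim=4, channels=64, k_max=16)
ext = torch.tensor([1.0, 0.5, 0.0, 2.0])   # illustrative external parameters
W, R = hyper(ext)

Under this reading, W would parameterize a pointwise transform like the spatial branch sketched earlier, and R the retained Fourier modes of the frequency branch.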
Li further teaches wherein the spatial layer comprises a one-dimensional (1-D) convolutional network ([Li, page 6] “We set k_{max,j} = 16, d_v = 64 for the 1-d problem” AND [Li, page 3] “integral operator is restricted to a convolution, and instantiated through a linear transformation in the Fourier domain ...”, wherein the examiner interprets setting the k_{max,j} value for the 1-d input and “the integral operator is restricted to a convolution, and instantiated through a linear transformation in the Fourier domain” to be the same as the spatial layer comprising a one-dimensional convolutional network, because both describe applying a learned spatial-domain transformation to a one-dimensional signal), and wherein the frequency layer operates in a Fourier space and uses a Fast Fourier Transform to generate the second output ([Li, page 4] “apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹.” AND [Li, page 3] “we exploit this fact by parameterizing κφ directly in Fourier space and using the Fast Fourier Transform to efficiently compute (3).”, wherein the examiner interprets the combined disclosures of applying F, applying R to the Fourier modes, applying F⁻¹, and “using the Fast Fourier Transform” to be the same as the frequency layer operating in a Fourier space and using a Fast Fourier Transform to generate the second output, because both describe computing the Fourier-domain transformation of the signal using FFT-based spectral operations.)

Li, Ha, and the instant application are analogous art because they are all directed to neural architectures that process physical-system input signals using learned spatial-domain transformations, learned Fourier-domain transformations, and parameter tensors produced by a hypernetwork to govern these layers. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 18 disclosed by Li and Ha to include the “using the Fast Fourier Transform to efficiently compute (3)” disclosed by Li. One would be motivated to do so to generate Fourier-space outputs with improved computational efficiency and spectral resolution, as suggested by Li ([Li, page 3] “using the Fast Fourier Transform to efficiently compute (3)”).

Regarding claim 20, Li and Ha teach the method of claim 18 (see rejection of claim 18). Li further teaches wherein the first Fourier layer further comprises a layer combiner that generates and provides a first Fourier layer output to the second Fourier layer ([Li, page 3] “the input a ∈ A is first lifted to a higher dimensional representation v₀(x) = P(a(x))… Then we apply several iterations of updates vₜ.”, AND [Li, page 6] “We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (2) and (4) with the ReLU activation as well as batch normalization. Unless otherwise specified, we use N = 1000 training instances and 200 testing instances”, wherein the examiner interprets the stacking of four Fourier integral operator layers to be the same as the first Fourier layer comprising a layer combiner that generates and provides a first Fourier layer output to the second Fourier layer, because both describe producing an intermediate transformed output v₁ that becomes the direct input to the next Fourier layer in the sequence.
In operator-learning architectures, the summation W v_t(x) + κ_φ(v_t)(x) is the layer-combined output of the Fourier layer, which is then propagated to the next Fourier layer.) Li further teaches wherein the first Fourier layer output is based on the first output and the second output ([Li, page 4] “Fourier layers: Start from input v. On top: apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W … The output u(x) = Q(v_T(x)) is the projection of v_T by the local transformation Q : R^{d_v} → R^{d_u}. In each iteration, the update v_t ↦ v_{t+1} is defined as the composition of a non-local integral operator K and a local, nonlinear activation function σ.”, AND [Li, page 6] “Furthermore, our architecture has a consistent error at any resolution of the inputs and outputs.”, wherein the examiner interprets “from input v. On top: apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; … output u(x) = Q(v_T(x)) is the projection of v_T by the local transformation Q : R^{d_v} → R^{d_u}. In each iteration, the update v_t ↦ v_{t+1} is defined as the composition of a non-local integral operator” to be the same as the Fourier layer output being based on the first output and the second output, because both are directed to combining the frequency-branch and spatial-branch results into the layer output that the subsequent Fourier layers and operators use for computation.)

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Li in view of Ha, further in view of NPL reference “Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks” by Balduzzi et al. (referred to herein as Balduzzi).

Regarding claim 3, Li and Ha teach the method of claim 2 (see rejection of claim 2). Li further teaches wherein during training of the hyper network, the hyper network modifies the Fourier layers ... wherein the updated parameters are changed in both the frequency and spatial component ([Li, page 4] “apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W.”, wherein the examiner interprets “linear transform R on the lower Fourier modes” and “local linear transform W” to be the same as the Fourier layers having parameters in both the frequency and spatial component, because both are directed to applying learned transformations to Fourier coefficients in the frequency domain and localized pointwise operations in the spatial domain, AND [Li, page 6] “R can be defined to depend on (Fa) to parallel (3) ... a parametric function that maps (k, (Fa)(k)) to the values of the appropriate Fourier mode.”, wherein the examiner interprets “a parametric function that maps to the values of the appropriate Fourier mode” to be the same as the hyper network modifying the Fourier layers, because both are directed to generating or updating Fourier-layer parameters during training based on input-derived features.)

Li and Ha do not teach based on a Taylor expansion around a learned configuration to determine updated parameters for the Fourier layers.
Balduzzi teaches determining updated parameters based on a Taylor expansion around a learned configuration ([Balduzzi, Abstract] “The key technical tool is the neural Taylor approximation – a straightforward application of Taylor expansions to neural networks and the associated Taylor loss.”, wherein the examiner interprets “neural Taylor approximation – a straightforward application of Taylor expansions to neural networks” to be the same as a Taylor expansion around a learned configuration, because both are directed to estimating changes in network parameters by linearizing the network output function around a current set of learned weights, which serves as a reference point for determining updated parameters.) Li, Ha, Balduzzi, and the instant application are analogous art because they are all directed to training neural networks using parameter-update schemes during optimization of layered network architectures. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 2 disclosed by Li and Ha to include the “neural Taylor approximation – a straightforward application of Taylor expansions to neural networks” disclosed by Balduzzi. One would be motivated to do so to improve the accuracy and stability of parameter updates during training by using gradient-based approximations that are sensitive to local curvature, as suggested by Balduzzi ([Balduzzi, Abstract] “a straightforward application of Taylor expansions to neural networks and the associated Taylor loss”).

Claims 10 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Li in view of NPL reference “Neural Fields in Visual Computing and Beyond” by Xie et al. (referred to herein as Xie).

Regarding claim 10, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li further teaches parameters corresponding to the spatial domain and the frequency domain ([Li, page 4] “apply the Fourier transform F; a linear transform R on the lower Fourier modes and filters out the higher modes; then apply the inverse Fourier transform F⁻¹. On the bottom: apply a local linear transform W.”, wherein the examiner interprets “linear transform R on the lower Fourier modes” and “local linear transform W” to be the same as hyper network parameters corresponding to the spatial domain and the frequency domain, because both are directed to parameters that operate across spectral and spatial representations within the network layers.) Li, Ha, and the instant application are analogous art because they are all directed to a method for generating parameters of a downstream neural network architecture using a parameter-generating model. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Li to include the “apply the Fourier transform F; a linear transform R on the lower Fourier modes … then apply the inverse Fourier transform F⁻¹” disclosed by Li. One would be motivated to do so to incorporate parameters corresponding to the spatial and frequency domains into the hypernetwork architecture to improve representational capability and adaptation to physical systems, as suggested by Li ([Li, page 4] “apply the Fourier transform F; a linear transform R on the lower Fourier modes … then apply the inverse Fourier transform F⁻¹”).

Li and Ha do not teach wherein the hyper network comprises hyper network ... wherein training the hyper network comprises updating the hyper network parameters using stochastic gradient descent based on a training database comprising input and output pairs until a target loss threshold is reached, and wherein the generating of the main network is performed after completing the training of the hyper network and comprises receiving the external parameters associated with the target physical system; and generating the main network parameters based on the hyper network parameters and the external parameters.

Xie teaches these limitations ([Xie, page 6] “To FiLM-condition a neural field Φ, we use a network Ψ to predict a per-layer (and potentially per-neuron) scale γ and bias β vector from latent variables z: Ψ(z) = {γ, β}”, wherein the examiner interprets “predict a per-layer (and potentially per-neuron) scale γ and bias β vector” to be the same as hyper network parameters corresponding to the spatial domain and the frequency domain, because both are directed to parameters that modulate downstream network layers in a structured manner that spans spatial and frequency representations, AND [Xie, page 5] “defining a function Ψ that maps latent variables z to a subset of neural network parameters Θ = Ψ(z) that then parameterize the neural field ΦΘ”, wherein the examiner interprets “Ψ maps latent variables z to neural network parameters Θ” to be the same as generating the main network parameters based on the hyper network parameters and the external parameters, because both are directed to producing weights for the main network using a trained hypernetwork conditioned on external or system-specific variables, AND [Xie, page 4] “The latent code z = argmin_z L(z, Θ) is obtained by minimizing some loss function L, which may be an expectation over a dataset.”, wherein the examiner interprets “minimizing some loss function L over a dataset” to be the same as training the hyper network using stochastic gradient descent based on a training database comprising input and output pairs until a target loss threshold is reached, because both are directed to iterative training of the hypernetwork using paired training data and an optimization objective until a convergence or stopping condition is met.) Li, Ha, Xie, and the instant application are analogous art because they are all directed to neural network-based simulation frameworks that involve parameterized architectures operating in both spatial and frequency domains.
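As a purely illustrative rendering of the FiLM conditioning quoted from Xie, Ψ(z) = {γ, β}, the sketch below predicts a per-layer scale γ and bias β from a latent z and applies them to hidden features. All identifiers are hypothetical, and the training loop is reduced to a single stochastic-gradient step.

import torch
import torch.nn as nn

class FiLM(nn.Module):
    # Psi predicts a per-layer scale gamma and bias beta from the latent z
    # and modulates the conditioned network's features h accordingly.
    def __init__(self, z_dim, feat_dim):
        super().__init__()
        self.psi = nn.Linear(z_dim, 2 * feat_dim)

    def forward(self, h, z):
        gamma, beta = self.psi(z).chunk(2, dim=-1)
        return gamma * h + beta

film = FiLM(z_dim=8, feat_dim=64)
z = nn.Parameter(torch.randn(8))        # latent describing the target system
h = torch.randn(32, 64)                 # features of the conditioned field
opt = torch.optim.SGD([z, *film.parameters()], lr=1e-2)
loss = film(h, z).pow(2).mean()         # stand-in for a task loss over pairs
loss.backward()
opt.step()                              # one stochastic-gradient update

Iterating such updates over a database of input-output pairs until a loss target is met is the procedure the examiner maps to the claim 10 training limitation.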
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Li and Ha to include the “Ψ maps latent variables z to neural network parameters Θ that then parameterize the neural field ΦΘ” disclosed by Xie. One would be motivated to do so to allow the system to generalize across multiple tasks or physical systems by conditioning the generation of the main network’s weights on varying parameters, as suggested by Xie ([Xie, page 5] “Ψ maps latent variables z to a subset of neural network parameters Θ = Ψ(z) that then parameterize the neural field ΦΘ.”).

Regarding claim 12, Li and Ha teach the method of claim 11 (see rejection of claim 11). Li and Ha do not teach receiving input data; simulating the physical system based on the input data to provide a simulation result; and determining whether to activate an alarm or hardware control sequence based on the simulation result. Xie teaches these limitations ([Xie, page 3] “A typical neural fields algorithm in visual computing proceeds as follows (Figure 3): Across space-time, we sample coordinates and feed them into a neural network to produce field quantities. The field quantities are samples from the desired reconstruction domain of our problem. Then, we apply a forward map to relate the reconstruction to the sensor domain”, wherein the examiner interprets “sampling coordinates and feeding into a neural network to produce field quantities” to be the same as receiving input data and simulating the physical system to provide a simulation result, because both are directed to producing outputs of a modeled physical system using a neural network based on coordinate-based inputs representing system state or configuration, AND [Xie, page 19] “Control can be achieved either by relying on a planner or directly from observations. Neural fields have been used for this task by learning an obstacle barrier function approximated by an SDF [LQCA20]. Similarly, Bhardwaj et al. [BSM∗21a] solve the robot arm self-collision avoidance task by using neural fields to predict the closest distance between robot links, given its joint configuration”, wherein the examiner interprets “predicting the closest distance between robot links” and “control can be achieved from observations” to be the same as determining whether to activate an alarm or hardware control sequence based on the simulation result, because both are directed to using the neural network’s simulation output (e.g., predicted distance or field values) to initiate downstream control logic for safety, obstacle avoidance, or activation of a physical system component.) Li, Ha, Xie, and the instant application are analogous art because they are all directed to using neural network-based systems to simulate physical environments and generate control decisions based on those simulations. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 11 disclosed by Li and Ha to include the “neural fields ... used for this task by learning an obstacle barrier function ... to predict the closest distance between robot links” disclosed by Xie.
One would be motivated to do so to enable downstream safety-critical decisions and control responses based on neural simulations, as suggested by Xie ([Xie, page 19] “Control can be achieved either by relying on a planner or directly from observations”).

Regarding claim 13, Li and Ha teach the method of claim 1 (see rejection of claim 1). Li and Ha do not teach parameterizing a meta-learning network by modifying only the external parameters, wherein the main network based on the main network parameters generated by the hyper network includes fewer parameters than the hyper network. Xie teaches these limitations ([Xie, page 6] “An alternative to the conditional neural field approach is gradient based meta-learning [FAL17]. Here, all neural fields in our target distribution are viewed as specializations of an underlying meta-network with parameters θ [SCT∗20, TMW∗21]. Individual instances are obtained from fitting this meta-network to a set of observations O, minimizing a reconstruction loss L in a small number of gradient descent steps”, wherein the examiner interprets viewing all neural fields as specializations of a meta-network with parameters θ that are adapted via gradient steps to be the same as parameterizing a meta-learning network by modifying only the external parameters, because both describe a setup where system-level parameters (θ) are learned and adjusted to generalize across tasks, consistent with meta-learning, AND [Xie, page 5] “Different methods of conditioning differ in which parameters Θ are output by Ψ, as well as the form of Ψ itself.”, wherein the examiner interprets the statement that the hypernetwork Ψ outputs a subset of parameters Θ to define the main network, and that Ψ involves design choices affecting parameter count, to be the same as the main network including fewer parameters than the hyper network, because both describe a structure in which the main network is generated by a larger hypernetwork and contains a reduced subset of parameters.) Li, Ha, Xie, and the instant application are analogous art because they are all directed to methods for optimizing neural network architectures by leveraging system-level training parameters that generalize across tasks and reduce network complexity. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method of claim 1 disclosed by Li and Ha to include the “individual instances are obtained from fitting this meta-network to a set of observations … minimizing a reconstruction loss” disclosed by Xie. One would be motivated to do so to enable meta-learning generalization across parameterized systems while reducing the computational footprint of the generated model, as suggested by Xie ([Xie, page 5] “These design choices may impact generalization ability, parameter count, and computational cost.”).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVAN KAPOOR whose telephone number is (703) 756-1434. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST (times may vary). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVAN KAPOOR/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Oct 20, 2022
Application Filed
Jul 28, 2025
Non-Final Rejection — §101, §103
Oct 24, 2025
Response Filed
Dec 18, 2025
Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
11%
Grant Probability
28%
With Interview (+16.7%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
