Prosecution Insights
Last updated: April 19, 2026
Application No. 17/567,275

Simulation Warmup

Non-Final OA (§101, §103)
Filed: Feb 09, 2022
Examiner: DARWISH, AMIR ELSAYED
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Passivelogic Inc.
OA Round: 3 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Grants 60% of resolved cases
Career Allow Rate: 60% (3 granted / 5 resolved; +5.0% vs TC avg)
Interview Lift: +66.7% (strong lift; based on resolved cases with interview)
Avg Prosecution: 4y 0m typical timeline; 37 applications currently pending
Total Applications: 42 across all art units (career history)

Statute-Specific Performance

§101: 34.9% (-5.1% vs TC avg)
§103: 44.0% (+4.0% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 6.2% (-33.8% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 5 resolved cases

Office Action

§101, §103
DETAILED ACTION

Claims 1, 3-8, 10-19, and 21-23 are presented for examination. Claims 1, 10, 18 and 19 have been amended. This office action is in response to the amendment submitted on 06-Jan-2026.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments – 35 USC 101

On pgs. 7-11 of the Applicant Arguments/Remarks dated 10/01/2025 (hereinafter "Remarks"), Applicant argues the amended claims have overcome the rejection under 35 USC 101.

On pg. 7, the applicant argues the invention amounts to significantly more than the judicial exception. The examiner respectfully disagrees. For example, claim 1 as stated recites nothing more than applying the exception on a generic computer, which amounts to merely "applying it." It amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f).

On pg. 8, the applicant further argues the invention cannot be performed in the human mind. The examiner disagrees. The claim limitations flagged as mental are perfectly capable of being performed in the human mind with the aid of pen and paper. There is nothing in these claim limitations, under BRI and in light of the specification, that precludes the human mind from being able to perform them. The human mind is capable of determining initial values for the simulator and reversing a time series of inputs/outputs. Nothing in the claim language requires real-time execution. The courts do not distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer. As the Federal Circuit has explained, "[c]ourts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person's mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1360 (Fed. Cir. 2016) ("[W]ith the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper."); Mortgage Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324, 117 USPQ2d 1693, 1699 (Fed. Cir. 2016) (holding that a computer-implemented method for "anonymous loan shopping" was an abstract idea because it could be "performed by humans without a computer").

The applicant further argues the invention integrates into a practical application because it provides a specific workflow for operating a physical system. The examiner notes that the claim as recited is very generic; while it can be argued that the invention does apply to physical systems, the claim language itself, in its current form, does not integrate the exception into a practical application. Amending the claim language to integrate practical applications of the invention would be recommended.

Applicant's arguments have been fully considered but they are not persuasive. The rejection under 35 USC 101 is maintained.

Response to Arguments – 35 USC 103

Applicant's arguments with respect to the 103 rejections have been considered, but are moot in view of the new ground(s) of rejection provided below.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8, 10-19, and 21-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1

Step 1: Statutory class – machine.

Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. "3) Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III)." MPEP § 2106.04(a). The claims are directed to an abstract idea of data processing and analysis. The claim recites:

an optimizer that determines initial node values for a simulator, the simulator comprising nodes with values;

a reverser that reverses the input time series to time t=(0) to t=(−n), to produce a reversed input time series and reverses the output time series to time t=(0) to t=(−n);

The determining and reversing limitations are mental processes of evaluation and judgment and mathematical calculations. By way of example, one can mentally evaluate and determine initial values for a simulator; additionally, one can receive the output and reverse it to obtain a new reversed time series.

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The additional elements are:

A computer-enabled learning model training system comprising: a processor; a memory in operable communication with the processor, computing code associated with the processor configured to create a simulator trainer;

the simulator that uses an input time series from time t=(−n) to time t=(0) as input, and outputs for the nodes an output time series from time t=(−n) to time t=(0);

a learning model that uses the reversed input time series as training input and uses selected values of the output time series at t=(−n) as a ground truth for a cost function associated with the learning model, wherein the simulator comprises a heterogenous neural network comprising nodes governed by equations representing physical properties of a modeled system, including at least one of thermal capacitance, resistance, mass or energy transfer.

The simulator and learning model are mere instructions to apply an exception on a generic computer.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. As discussed with respect to Step 2A, the additional limitations are mere instructions to apply an exception on a generic computer and a general-purpose computer. They do not impose any meaningful limits on practicing the abstract idea, and therefore the claim does not provide an inventive concept in Step 2B. Further, in regards to Step 2B and as cited above in Step 2A, per MPEP 2106.05(g), "Obtaining information about transactions using the Internet to verify credit card transactions, CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011)" is merely data gathering. The additional elements have been considered both individually and as an ordered combination in the significantly-more consideration. This claim is ineligible.
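For orientation only, the workflow recited above (optimizer-chosen initial node values, a forward simulation from t=−n to t=0, reversal of the input and output series, and a learning model scored against the t=−n values as ground truth) can be sketched as follows. This is a hedged illustration with hypothetical names and toy dynamics; it is not the applicant's implementation or code from any cited reference.

```python
# Illustrative sketch only (hypothetical names and placeholder math; not the
# applicant's code or any cited reference's implementation). Requires numpy.
import numpy as np

def simulate(initial_nodes, inputs):
    """Toy simulator: rolls node values forward over the input series
    (covering t=-n .. t=0) with placeholder first-order dynamics."""
    state, outputs = initial_nodes, []
    for u in inputs:
        state = 0.9 * state + 0.1 * u
        outputs.append(state)
    return np.stack(outputs)                      # output series, t=-n .. t=0

def training_round(initial_nodes, inputs, step=0.1):
    """One pass of the recited loop: simulate, reverse, score, update."""
    outputs = simulate(initial_nodes, inputs)     # forward run of the simulator
    rev_inputs = inputs[::-1]                     # reverser: series now t=0 .. t=-n
    rev_outputs = outputs[::-1]
    ground_truth = rev_outputs[-1]                # selected output values at t=-n
    prediction = rev_inputs.mean(axis=0)          # untrained stand-in for the learning model
    cost = float(np.mean((prediction - ground_truth) ** 2))  # cost function
    # Optimizer nudges the initial node values using the cost signal
    # (a deliberately crude, gradient-free update, purely for illustration).
    next_initial = initial_nodes - step * (initial_nodes - ground_truth)
    return next_initial, cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nodes = rng.normal(size=3)                    # optimizer's initial node values
    series = rng.normal(size=(10, 3))             # input time series, t=-n .. t=0
    for i in range(5):                            # iterator until a stop state
        nodes, cost = training_round(nodes, series)
        print(i, round(cost, 4))
```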
Claim 3 recites further comprising a cost function determiner that uses selected node values from the output time series as an input into a cost function, and wherein the cost function determiner further comprises the cost function using the ground truth as input into the cost function, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 4 recites a cost derived from the cost function is used by the optimizer to determine subsequent initial node values, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 5 recites an iterator which iteratively runs the optimizer, the simulator, and the learning model until a stop state is reached, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 6 recites when the stop state is reached, the initial node values are used as input into a starting state estimation simulation, which is a mental process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 7 recites the starting state estimation simulation is run from time t=(−n) to time t=(0); wherein a state simulation is then run from time t(0) to t(n); the state simulation produces an output that can be used to produce a control sequence, and wherein the control sequence is used to run a device modeled by the state simulation, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 8 recites the Learning Model is a neural network, which is mere instructions to apply an exception on a generic computer under Step 2A Prong Two and Step 2B. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 10 recites a computer-enabled method (statutory category – process) to train a learning model using an optimizer model implemented in a computing system comprising one or more processors and one or more memories coupled to the one or more processors, the one or more memories comprising computer-executable instructions for causing the computing system to perform operations comprising, which is mere instructions to apply an exception on a generic computer under Step 2A Prong Two and Step 2B. The remaining limitations are similar to claim 1 and are rejected under the same rationale. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 11 recites running the learning model produces a reversed time series as learning model output, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 12 recites the learning model output at time t(−n) is compared with the initial simulator node values in a cost function, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 13 recites the cost is derived from the cost function, and wherein the cost is used for backpropagation within the learning model, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 14 recites the simulator is a heterogenous neural network, which is mere instructions to apply an exception on a generic computer under Step 2A Prong Two and Step 2B. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 15 recites the inputs comprise weather data over time, which is mere data collection under Step 2A Prong Two and Step 2B. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 16 recites the selected node values are temperature of areas inside a space that the simulator is modeling, which is mere data collection under Step 2A Prong Two and Step 2B. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 17 recites reversing the inputs of the simulator comprises reversing time series originally from t=(−n) to time=(0) to time t=(0) to t=(−n), to produce a reversed time series, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 18 recites a computer-readable storage medium (statutory category – machine) configured with instructions which upon execution by one or more processors perform a method for training a simulator, the method comprising, which is mere instructions to apply an exception on a generic computer under Step 2A Prong Two and Step 2B. MPEP § 2106.05(f). The remaining limitations are similar to claim 1 and are rejected under the same rationale. Claim 19 recites limitations similar to claim 13 and is rejected under the same rationale. Therefore, the claims are considered ineligible under 35 USC 101.

Claim 21 recites the simulator has nodes and wherein a plurality of nodes have a plurality of properties, which is mere instructions to apply an exception on a generic computer under Step 2A Prong Two and Step 2B. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 22 recites at least some of the plurality of nodes have at least two equations associated with them, which is a mental/mathematical process under Step 2A Prong One. Therefore, the claim is considered ineligible under 35 USC 101.

Claim 23 recites at least one property value is used as output, which is mere instructions to apply an exception on a generic computer under Step 2A Prong Two and Step 2B. Therefore, the claim is considered ineligible under 35 USC 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-8, 10-19 and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US20210191342A1) in view of Srivastava et al. (US-20210133539-A1) and further in view of Sutskever et al. (Sequence to Sequence Learning with Neural Networks).

Regarding Claim 1, Lee teaches a processor ([0093] "Still referring to FIG. 4, BMS controller 366 is shown to include a processing circuit 404 including a processor 406").

a memory in operable communication with the processor, computing code associated with the processor configured to create a simulator trainer ([0094] "Memory 408 (e.g., memory, memory unit, storage device, etc.) can include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage, etc.) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present application").

an optimizer that determines initial node values for a simulator ([0121] "to generate simulated experience data, sample input data is input into a dynamics model. The dynamics model can include a calibrated simulation model or a surrogate model of the HVAC system... For input into the dynamics model, the exogenous and endogenous parameters can be sampled to simulate different scenarios. Any sampling algorithm can be used to sample the exogenous and endogenous parameters. In some embodiments, several different sampling algorithms may be used to vary the state-action pair space for training the RL model").

the simulator comprising nodes with values ([0168] "the surrogate model may be trained by data generated by a calibrated simulation model so as to optimize the weights used in the edges and nodes of the neural network…the dynamic model of predictive modeler 602 is a surrogate model, which may include a deep neural network (DNN) model of HVAC dynamics. Surrogate models are generally designed to simulate how a particular system may react to a given input").

the simulator that uses an input time series from time t=(−n) to time t=(0) as input, and outputs for the nodes an output time series from time t=(−n) to time t=(0) ([0139] "endogenous parameters may be sampled according to a random pre-cooling algorithm. Given a sequence of time-varying bounds ut min and ut max for t=0, . . . , T with t=0 corresponding to the end of peak hours on the previous day and t=T corresponding to the end of peak hours on the current day … one or more processors executing the random pre-cooling algorithm can generate a sequence ut of a endogenous control parameter that achieves some amount of pre-cooling for an HVAC system. The one or more processors may first randomly choose two values τ1 and τ2 from the uniform distribution on the interval [τmin, τmax], and a value ta from the uniform distribution on the interval [0, tp−τ2]").

wherein the simulator comprises a heterogenous neural network ([0004] "the first experience data is generated using a simulation model or a surrogate model of the HVAC system, wherein the surrogate model can include a deep neural network" and [0121] "Exogenous parameters are parameters pertaining to the environment or otherwise outside of the control of the HVAC system, such as time of day, weather and weather forecasts, occupancy schedules, and occupancy trends. Endogenous parameters are variables chosen by a control system, such as setpoints or operating conditions. For input into the dynamics model, the exogenous and endogenous parameters can be sampled to simulate different scenarios." The variety and distinctness of inputs and variables make the NN heterogenous).

comprising nodes governed by equations representing physical properties of a modeled system, including at least one of thermal capacitance, resistance, mass or energy transfer ([0145-0147] "The DNN model may be controlled by a zone controller and zone physics that can be split into two independent subsystems for which all the inputs and outputs are known. The two independent subsystems may be independent, but may still be interconnected. Particularly, the local controllers of each zone may induce a sensible cooling load on the zone, which induces zone heat-transfer physics to affect the zone temperature. The zone temperature may then be measured by the controller. The measured zone temperature then may lead to a change in the sensible cooling load on the zone, and then the overall process may repeat.").
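As general background on what "nodes governed by equations representing physical properties" typically looks like in a lumped building-thermal model (this equation is illustrative only and is not taken from the application, Lee, or any other cited reference), a thermal node i with capacitance C_i connected to neighboring nodes j through resistances R_ij obeys:

\[
C_i \,\frac{dT_i}{dt} \;=\; \sum_{j \neq i} \frac{T_j - T_i}{R_{ij}} \;+\; \dot{Q}_i ,
\]

where T_i is the node temperature and Q̇_i is any direct heat input; mass- and energy-transfer terms enter the same balance in the same way.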
Lee, however, doesn't seem to explicitly teach a reverser that reverses the input time series to time t=(0) to t=(−n), to produce a reversed input time series and reverses the output time series to time t=(0) to t=(−n); nor a learning model that uses the reversed input time series as training input and uses selected values of the output time series at t=(−n) as a ground truth for a cost function associated with the learning model.

Srivastava teaches reversing the output time series to time t=(0) to t=(−n) ([0048] "At 404, based on the simulator's output data and the first result of the generator network, training an inference network of the variational autoencoder to generate a second result, the second result of the trained inference network inverting the first result of the generator and approximating the simulator's input data, the trained inference network functioning as an inverted simulator. In an embodiment, an unsupervised training technique trains the inference network").

Srivastava also teaches a learning model that uses the reversed input time series as training input and uses selected values of the output time series at t=(−n) as a ground truth for a cost function associated with the learning model (Fig. 4 and [0003] "training an inference network of the variational autoencoder to generate a second result, the second result of the trained inference network inverting the first result of the generator and approximating the simulator's input data, the trained inference network functioning as an inverted simulator"; [0004] "train an inference network of the variational autoencoder to generate a second result, the second result of the trained inference network inverting the first result of the generator and approximating the simulator's input data, the trained inference network functioning as an inverted simulator". EN: Please see [0024-0026] for the objective function, specifically eq(3) and eq(4), where the output is used as ground truth: "To learn the function S 104 faithfully, in an embodiment, GØ is parameterized with a deep neural network. The training can be achieved by minimizing a suitable measure of discrepancy D (depending on the output space of S) on the observations from the two functions on the same input with respect to ϕ"). [Reproduced figure from Srivastava omitted.]

Lee and Srivastava are analogous art because they are from the same field of endeavor in machine learning and simulation. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine Lee and Srivastava to arrive at reversing the output time series, to achieve more accurate NN training and results, especially for physical modeling such as HVAC modeling: "In an aspect, a model that can allow inference in both the forward and inverse directions of the simulator, in contrast to traditional numerical simulators which are difficult if not impossible to invert, can be beneficial. Simulators can in general be quite complex, computationally inefficient and discontinuous. An efficient heuristic, smooth version of a simulator that is invertible, can be valuable for applications in downstream reasoning tasks (e.g., by finding the parameters of a physical phenomenon from observing the data), or in technical fields such as circuit design, protein folding, and materials design, among others." (Srivastava, [0016])

However, Lee doesn't explicitly teach, and Srivastava is not relied upon for: a reverser that reverses the input time series to time t=(0) to t=(−n), to produce a reversed input time series.

Sutskever teaches a reverser that reverses the input time series to time t=(0) to t=(−n), to produce a reversed input time series (Pg. 2, "The idea is to use one LSTM to read the input sequence, one timestep at a time, to obtain large fixed dimensional vector representation"; Pg. 4, "By reversing the words in the source sentence, the average distance between corresponding words in the source and target language is unchanged. However, the first few words in the source language are now very close to the first few words in the target language, so the problem's minimal time lag is greatly reduced. Thus, backpropagation has an easier time 'establishing communication' between the source sentence and the target sentence, which in turn results in substantially improved overall performance"; and Fig. 1 explicitly shows the output being reversed). [Reproduced Figure 1 from Sutskever omitted.]

Lee, Srivastava, and Sutskever are analogous art because they are from the same field of endeavor in machine learning and simulation. Before the effective filing date of the invention, it would have been obvious to a person of ordinary skill in the art to combine Lee, Srivastava, and Sutskever to arrive at reversing the input time series, to achieve more efficient NN training: "LSTMs trained on reversed source sentences did much better on long sentences than LSTMs trained on the raw source sentences (see sec. 3.7), which suggests that reversing the input sentences results in LSTMs with better memory utilization." (Sutskever, Pg. 4-5)
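Sutskever's reversal argument quoted above is easy to check numerically. The toy calculation below is an assumed example (not taken from the paper): it confirms that feeding the source sequence in reverse leaves the average source-to-target distance unchanged while making the earliest corresponding pairs nearly adjacent, which is the reduced "minimal time lag" the passage refers to.

```python
# Toy check of the reversal argument (illustrative only).
# In a seq2seq setup the decoder starts after the encoder has read all n
# source tokens, so the gap between source token i and target token i is
# (n - position_of_source_token_i) + i.
def lags(source_positions, n):
    return [(n - p) + i for i, p in enumerate(source_positions)]

n = 4
in_order  = lags(list(range(n)), n)        # source fed s1..s4 -> [4, 4, 4, 4]
reversed_ = lags(list(range(n))[::-1], n)  # source fed s4..s1 -> [1, 3, 5, 7]

print(in_order,  sum(in_order) / n)        # constant lag, mean 4
print(reversed_, sum(reversed_) / n)       # minimal lag now 1, mean still 4
```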
Regarding Claim 3, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 1. Srivastava further teaches a cost function determiner that uses selected node values from the output time series as an input into a cost function, and wherein the cost function determiner further comprises the cost function using the ground truth as input into the cost function (EN: Please see [0024-0026] for the objective function, specifically eq(3) and eq(4), where the output is used as ground truth input in the cost function: "To learn the function S 104 faithfully, in an embodiment, GØ is parameterized with a deep neural network. The training can be achieved by minimizing a suitable measure of discrepancy D (depending on the output space of S) on the observations from the two functions on the same input with respect to ϕ").

Regarding Claim 4, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 3. Srivastava further teaches a cost derived from the cost function is used by the optimizer to determine subsequent initial node values ([0024-0027] "The training can be achieved by minimizing a suitable measure of discrepancy D (depending on the output space of S) on the observations from the two functions on the same input with respect to ϕ, as shown below: eq(3) … In an embodiment, since the domain of S is infinite in practice, the optimization problem can be solved using mini-batches in a stochastic gradient descent first order optimization method such as Adaptive Moment Estimation (Adam). Upon successful training of the generator, Gϕ≈S, the next step in the simulator-assisted training of VAE in an embodiment can be to train an inference network 108 … to invert the generator 106, which in turn, if successfully trained, provides an approximate inversion of the simulator 104 and as a result a disentangled and interpretable representation of the latent space 110. In an embodiment, the simulator-assisted training of VAE can do that via the following objective:" eq(4)).
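The discrepancy-minimization objective paraphrased in the two passages above can be written schematically as follows (our notation; a paraphrase of the quoted description, not a verbatim reproduction of Srivastava's eq(3) or eq(4)):

\[
\min_{\phi}\; \mathbb{E}_{x}\Big[\, D\big(S(x),\, G_{\phi}(x)\big) \Big],
\]

where S is the simulator, G_φ is the generator network, and D is a discrepancy measure chosen to suit the output space of S; the inference network is then trained to invert G_φ, yielding the approximate inverted simulator relied on for the ground-truth comparison.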
Regarding Claim 5, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 4. Lee further teaches an iterator which iteratively runs the optimizer, the simulator, and the learning model until a stop state is reached ([0194] "T is the number of states considered in a policy gradient trajectory, with t as the iterative variable over the trajectory, M is the number of samples considered with m as the iterative variable … Qθ π(st, at)−Vθ π(st) defines the reward of taking action at in state st (with Qθ π being a Q function of future projected value, Vθ π being a value function of the current state). Using the gradient of J(θ), the reward function can be maximized to derive an actionable policy for a given state or trajectory of states and actions" and [0195] "Once the RL trainer 608 has satisfied the stop condition for training, the RL model 606 can be used by controller 610 to control HVAC system 612").

Regarding Claim 6, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 5. Lee further teaches when the stop state is reached, the initial node values are used as input into a starting state estimation simulation ([0144] "the internal loads, operating strategies, and building parameters of the HVAC system can be loaded into a simulation engine to model building zones and thermal dynamics for the HVAC system in its environment" and [0195] "Once the RL trainer 608 has satisfied the stop condition for training, the RL model 606 can be used by controller 610 to control HVAC system 612. In controlling HVAC system 612, controller 610 receives state information from HVAC system 612, inputs the state or potential action into the RL model 606, and sends a control action based on the determined output of RL model 606").

Regarding Claim 7, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 6. Lee further teaches the starting state estimation simulation is run from time t=(−n) to time t=(0); wherein a state simulation is then run from time t(0) to t(n) ([0144] "Weather data may also be introduced to simulate loads over a week, month, or year, for example." The examiner picks the first point to be -n, the middle point to be t(0), and the end point to be n). Lee further teaches the state simulation produces an output that can be used to produce a control sequence, and wherein the control sequence is used to run a device modeled by the state simulation ([0195] "controller 610 receives state information from HVAC system 612, inputs the state or potential action into the RL model 606, and sends a control action based on the determined output of RL model 606").

Regarding Claim 8, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 1. Lee further teaches the Learning Model is a neural network ([0174] "RL model 606 is a Q-Learning model… In some embodiments, the Q-Learning model is a deep Q-Learning model wherein the model uses a neural network").

Regarding Claim 10, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 8. Lee further teaches a computer-enabled method to train a learning model using an optimizer model implemented in a computing system comprising one or more processors and one or more memories coupled to the one or more processors, the one or more memories comprising computer-executable instructions for causing the computing system to perform operations comprising ([0086] "BMS controller 366 can include one or more computer systems (e.g., servers, supervisory controllers, subsystem controllers, etc.) that serve as system level controllers, application or data servers, head node"). The remaining claim limitations are similar to claim 1 and are rejected under the same rationale.

Regarding Claim 11, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 10. Srivastava further teaches running the learning model produces a reversed time series as learning model output ([0016] and [0048]: when reversing the input as taught by Sutskever, the output of Srivastava is also reversed and is used for the learning model per Srivastava).

Regarding Claim 12, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 11. Srivastava further teaches the learning model output at time t(−n) is compared with the initial simulator node values in a cost function ([0024-0027] eq(3) and eq(4) show the output is compared to the initial values to calculate the cost and subsequently minimize it).

Regarding Claim 13, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 12. Lee further teaches the cost is derived from the cost function, and wherein the cost is used for backpropagation within the learning model ([0190] "the RL model may be initialized as a deep neural network, in which experience data is received as input into the model and a Q value for the input experience data is generated as the output. The Q function neural network may be trained using backpropagation to train weights of the network such that the Q function produces more accurate outputs to the expected output. The ideal Q value Q* may be calculated as…" where Q is the cost in the cost function).
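For background on the Q value discussed in the passage above: the Office Action elides Lee's exact expression, but the ideal Q value in deep Q-learning is conventionally the Bellman optimality target, shown here only as standard context rather than as Lee's formula:

\[
Q^{*}(s_t, a_t) \;=\; \mathbb{E}\big[\, r_t + \gamma \max_{a'} Q^{*}(s_{t+1}, a') \,\big],
\]

with r_t the reward, γ the discount factor, and the network trained by backpropagation to reduce the gap between its prediction and this target.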
Regarding Claim 14, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 13. Lee further teaches the simulator is a heterogenous neural network ([0004] "the first experience data is generated using a simulation model or a surrogate model of the HVAC system, wherein the surrogate model can include a deep neural network" and [0121] "Exogenous parameters are parameters pertaining to the environment or otherwise outside of the control of the HVAC system, such as time of day, weather and weather forecasts, occupancy schedules, and occupancy trends. Endogenous parameters are variables chosen by a control system, such as setpoints or operating conditions. For input into the dynamics model, the exogenous and endogenous parameters can be sampled to simulate different scenarios." The variety and distinctness of inputs and variables make the NN heterogenous).

Regarding Claim 15, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 14. Lee further teaches the inputs comprise weather data over time ([0144] "Weather data may also be introduced to simulate loads over a week, month, or year, for example").

Regarding Claim 16, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 15. Lee further teaches the selected node values are temperature of areas inside a space that the simulator is modeling ([0156] "the zone sub-model outputs include both zone temperature and VAV airflow, which are components of y and z respectively for the overall prediction system of the DNN model").

Regarding Claim 17, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 16. Sutskever further teaches reversing the inputs of the simulator comprises reversing time series originally from t=(−n) to time=(0) to time t=(0) to t=(−n), to produce a reversed time series (Fig. 1 shows the input reversing such that the timesteps are reversed).

Regarding Claim 18, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 8. Lee further teaches a computer-readable storage medium configured with instructions which upon execution by one or more processors perform a method for training a simulator, the method comprising ([0044] "another one or more computer readable storage media are disclosed, the one or more storage media having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to"). The remaining claim limitations are similar to claim 1 and are rejected under the same rationale.

Claim 19 recites limitations similar to claim 13 and is rejected under the same rationale.

Regarding Claim 21, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 10. Lee further teaches the simulator has nodes and wherein a plurality of nodes have a plurality of properties (this is the definition of a heterogenous NN covered in claim 1; explicitly, [0004] "the first experience data is generated using a simulation model or a surrogate model of the HVAC system, wherein the surrogate model can include a deep neural network" and [0121] "Exogenous parameters are parameters pertaining to the environment or otherwise outside of the control of the HVAC system, such as time of day, weather and weather forecasts, occupancy schedules, and occupancy trends. Endogenous parameters are variables chosen by a control system, such as setpoints or operating conditions. For input into the dynamics model, the exogenous and endogenous parameters can be sampled to simulate different scenarios." The variety and distinctness of inputs and variables make the NN heterogenous).

Regarding Claim 22, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 21. Lee further teaches at least some of the plurality of nodes have at least two equations associated with them ([0146] showcases a possible composite function containing multiple equations associated with the network's nodes; [reproduced equations from Lee [0146] omitted]).

Regarding Claim 23, Lee in view of Srivastava and further in view of Sutskever teaches the method of claim 21. Lee further teaches at least one property value is used as output ([0147] "The current output of the physics sub-model, the zone temperature, may be an input to the controller sub-model. In some embodiments, the controller sub-model may then output the next predicted output, such as the average zone heating and cooling duties over a current interval, which may then be used as inputs to the physics sub-model").

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US-20220231912-A1: discloses optimization and calibration of RL with time series. Unsupervised Learning of Video Representations using LSTMs: this NPL by Srivastava explicitly discloses reversing of the output.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR DARWISH whose telephone number is (571) 272-4779. The examiner can normally be reached 7:30-5:30 M-Thurs. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emerson Puente, can be reached on 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.E.D./Examiner, Art Unit 2187
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199

Prosecution Timeline

Feb 09, 2022
Application Filed
Jul 03, 2025
Non-Final Rejection — §101, §103
Oct 01, 2025
Response Filed
Oct 29, 2025
Final Rejection — §101, §103
Jan 06, 2026
Response after Non-Final Action
Jan 22, 2026
Request for Continued Examination
Jan 29, 2026
Response after Non-Final Action
Feb 10, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12475391
METHOD AND SYSTEM FOR EVALUATION OF SYSTEM FAULTS AND FAILURES OF A GREEN ENERGY WELL SYSTEM USING PHYSICS AND MACHINE LEARNING MODELS
2y 5m to grant • Granted Nov 18, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 99% (+66.7%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
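The headline figures above appear to fit together arithmetically as sketched below, with the caveat that the exact methodology is not disclosed here; the 99% cap and the multiplicative application of the interview lift are assumptions in this illustration.

```python
# Hedged reconstruction of the displayed figures (methodology assumed, not disclosed).
granted, resolved = 3, 5
career_allow_rate = granted / resolved              # 0.60 -> "Grant Probability 60%"

interview_lift = 0.667                              # "+66.7% Interview Lift"
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)  # assumed 99% display cap
print(f"{career_allow_rate:.0%} base, {with_interview:.0%} with interview")  # 60% base, 99% with interview
```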

Free tier: 3 strategy analyses per month