DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 8, 10, 17, and 19 were amended. Claims 5, 6, 9, 14, 15, and 18 are canceled. Claims 21 – 26 are new. Claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26 are pending and are examined herein.
Claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26 are rejected under 35 U.S.C. 101.
Claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26 are rejected under 35 U.S.C. 103.
Response to Amendment
The amendment filed November 13, 2025 has been entered. Claims 1, 8, 10, 17, and 19 were amended. Claims 5, 6, 9, 14, 15, and 18 are canceled. Claims 21 – 26 are new. Claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26 are pending and are examined herein. Applicant’s amendments to the claims have overcome each and every objection and 112(b) rejection previously set forth in the Non-Final Rejection Office Action mailed August 20, 2025.
Response to Arguments
Applicant's arguments filed November 13, 2025 regarding the 35 U.S.C. 101 rejection of claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26 have been fully considered, but they are not persuasive. Applicant argues, on pages 10-11, that amended claim 1 is directed to an improvement in a technical field.
Claim 1 still describes, at a high level, creating a logical graph with subgraphs that flow into one another and alternative subgraphs to represent how data is generated, and then simulating paths through that graph to generate synthetic data representative of real data. Those steps are directed to abstract concepts, including conceptual modeling of information and algorithmic simulation of path selection. The claim also recites use of a GUI and an AI model, but these are stated only in general terms and serve as generic tools for carrying out the abstract idea, rather than as a specific improvement to GUI technology, graph processing, simulation, or AI model training. In addition, the use of synthetic data to train other AI models is an intended use and does not render the claim eligible. For these reasons, claim 1 remains directed to an abstract idea, and the additional elements, considered individually and in combination, do not amount to significantly more.
Applicant's arguments filed November 13, 2025 regarding the 35 U.S.C. 103 rejection of claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26 have been fully considered, but they are not persuasive.
Applicant argues that Gutierrez (US 11847390) and Dechene (US 2022/0245462) do not teach the amended claim 1 limitation “guiding, using a graphical user interface (GUI), a user to generate a logical graph including a plurality of subgraphs that represents how real data is generated in a situation, wherein subsequent subgraphs flow logically into each other to create the logical graph, and wherein at least some subgraphs of the plurality of subgraphs are alternatives of each other within the situation;”. However, the limitations added to claim 1 were moved from canceled claims 5 and 6, which were rejected over the combination further including Floren (US 2022/0075515). Therefore, the amended limitations remain taught by Floren in combination with Gutierrez and Dechene, and the 103 rejection is maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1 – 4, 7, 8, 10 – 13, 16, 17, 19 – 26, in accordance with these steps, follows.
Step 1 Analysis:
Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter).
Claims 1 – 4, 7, 8, 21 – 26 are directed to a method, meaning they are directed to the statutory category of process. Claims 10 – 13, 16, 17 are directed to a system, which falls within the statutory category of machine. Claims 19 – 20 are directed to a computer program product, which falls within the statutory category of manufacture.
Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis:
Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101.
Regarding claim 1, the following claim elements are abstract ideas:
wherein subsequent subgraphs flow logically into each other to create the logical graph, (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components, or by a human using pen and paper.)
and wherein at least some subgraphs of the plurality of subgraphs are alternatives of each other within the situation; (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components, or by a human using pen and paper.)
and simulating the situation a number of times by having the AI model choose paths through the logical graph to generate synthetic data that is representative of the real data, such that the synthetic data can be used for training other AI models. (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:
guiding, using a graphical user interface (GUI), a user to generate a logical graph including a plurality of subgraphs that represents how real data is generated in a situation; (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
using an artificial intelligence (AI) model, (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, claim 2 recites the following abstract idea:
Adding additional nuanced steps within the logical graph. (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components.)
Claim 2 further recites the following additional elements:
wherein the logical graph is a simplified version of the situation, (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
via analyzing the simulations of the AI model, (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 3, the rejection of claim 1 is incorporated herein. Further, claim 3 recites the following additional elements:
receiving positive or negative rewards associated with steps of the logical graph (Receiving positive or negative rewards is a well-understood, routine, and conventional activity in the field of AI. It does not integrate the judicial exception into a practical application. See MPEP § 2106.05(d). Therefore, this does not amount to significantly more than the judicial exception.)
the positive or negative rewards define why agents of the AI model would choose different paths of the logical graph (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
training the AI model by having the agents of the AI model choose the different paths based on the positive or negative rewards. (Training an AI model to make choices based on positive or negative rewards is a well-understood, routine, and conventional activity in the field of AI. It does not integrate the judicial exception into a practical application. See MPEP § 2106.05(d). Therefore, this does not amount to significantly more than the judicial exception.)
Regarding claim 4, the rejection of claim 3 is incorporated herein. Further, claim 4 recites the following abstract idea:
the logical graph includes a plurality of nodes connected via edges, (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components.)
Claim 4 further recites the following additional element:
using a graphical processing unit to graphically simulate the situation the number of times using the AI model. (Using an AI model and a GPU to graphically simulate is a well-understood, routine, and conventional activity in the field of AI. It does not integrate the judicial exception into a practical application. See MPEP § 2106.05(d). Therefore, this does not amount to significantly more than the judicial exception.)
Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, claim 7 recites the following additional element:
training another AI model with the synthetic data such that no real-world data is used to train the another AI model. (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 8, the rejection of claim 1 is incorporated herein. Further, claim 8 recites the following additional element:
guiding the user to generate the logical graph includes recommending one or more steps for the user to add into the logical graph. (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 10, the following claim elements are additional elements:
a processor; and a memory in communication with the processor, the memory containing instructions that, when executed by the processor (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
The rest of claim 10 and claims 11 – 13, 16, and 17 recite substantially similar subject matter to claims 1 and 2 – 4, 7, and 8, respectively, and are rejected with the same rationale, mutatis mutandis.
Regarding claim 19, the following claim elements are additional elements:
the computer program product comprising a computer readable storage medium having program instructions embodied therewith (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
The rest of claims 19 and 20 recite substantially similar subject matter to claims 1 and 2 respectively and are rejected with the same rationale, mutatis mutandis.
Regarding claim 21, the rejection of claim 1 is incorporated herein. Further, claim 21 recites the following abstract idea:
and wherein the AI model chooses paths during the simulating based on factors created by the user within the GUI, (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components, or by a human using pen and paper.)
Claim 21 further recites the following additional elements:
wherein the AI model is a reinforcement learning model, (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
wherein the factors created by the user are categorical factors or continuous factors. (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 22, the rejection of claim 21 is incorporated herein. Further, claim 22 recites the following abstract ideas:
setting a statistical relationship between the factors (Setting a statistical relationship between the factors recites a mathematical relationship, which is a mathematical concept.)
creating one or more simulation states; and (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components, or by a human using pen and paper.)
creating one or more simulation flow graphs … based on the one or more simulation states created, and wherein each node in the one or more simulation flow graphs represents a simulation step and each edge establishes a way-point between two states. (This can be practically performed in the human mind under its broadest reasonable interpretation, aside from the recitation of generic computer components, or by a human using pen and paper.)
Claim 22 further recites the following additional elements:
created by the user within the GUI; (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
using a node-graph canvas (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 23, the rejection of claim 22 is incorporated herein. Further, claim 23 recites the following additional element:
detecting, by a controller, the user connecting two state nodes; (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
and prompting, by the controller, the user within the GUI to add rewards to reinforce a particular outcome for a simulation using logical operators, (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
wherein the user is provided different quantities of the rewards to the way-point between the two states. (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 24, the rejection of claim 23 is incorporated herein. Further, claim 24 recites the following additional element:
providing a review dashboard to the user within the GUI, (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
wherein the review dashboard includes sectional insights to different aspects of a configuration of the simulation. (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 25, the rejection of claim 24 is incorporated herein. Further, claim 25 recites the following additional element:
wherein the sectional insights include at least factors, statistical relationships, time series, the one or more simulation states, and the rewards for the logical graph, and wherein the GUI further enables the user to combine the simulation with cumulative metrics of previously run simulations. (These are mere instructions to apply the abstract idea on a generic computer. See MPEP § 2106.05(f). Therefore, this does not amount to significantly more than the judicial exception and does not integrate the abstract idea into a practical application.)
Regarding claim 26, the rejection of claim 21 is incorporated herein. Further, claim 26 recites the following abstract idea:
retraining, by a controller, the reinforcement learning model based on edits to rewards or simulation flows by the user. (Iterative training of a reinforcement learning model with rewards merely recites a mathematical relationship, which is a mathematical concept.)
Claim 26 does not recite additional elements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3, 7, 10 – 12, 16, 19 – 22 are rejected under 35 U.S.C. 103 as being unpatentable over Floren et al. (U.S. Pub. 2022/0075515) in view of Dechene et al. (U.S. Pub. 2022/0245462), further in view of Gutierrez et al. (U.S. Patent No. 11,847,390).
Regarding Claim 1, Floren teaches
A computer-implemented method comprising: guiding, using a graphical user interface (GUI), a user to generate a logical graph including a plurality of subgraphs that represents how real data is generated in a situation, wherein subsequent subgraphs flow logically into each other to create the logical graph, and wherein at least some subgraphs of the plurality of subgraphs … ; ([0167] of Floren states “Referring to FIG. 8A, an example user interface 800 includes an interactive graph section 802 in which various systems, subsystems, and data objects can be represented by nodes or indicators, such as icons 804 and 806. For ease of description, the information shown in the GUIs of the present disclosure is generally referred to as objects, but as noted various systems and subsystems may similarly be represented. As described throughout the present disclosure, the systems, subsystems, and objects may represent various things, such as people, locations, facilities, and the like. Relationships among the various systems, subsystems, and objects are represented by edges, such as edge 808, which may optionally be directional (or bi-directional) to indicate, e.g., flows of information or items.” [0168] of Floren states “The example user interface 800 provides a view of a simulated technical system representing a real-world system. The view may include various technical systems, subsystems, objects, and the like. Although not shown in the user interface, the system can associate various data, including time-based data, and models with the systems, subsystems, and objects, such that simulations can be run.” [0183] of Floren states “Referring to FIGS. 8I-8J, example user interface portions 860-862 are shown which may comprise portions of, or updates to, user interface 800. The user interface portions 860-862 illustrate system functionality related to subgraphs. Subgraphs provide another way to abstract away parts of a larger, more complicated graph. 
User interface portion 860 illustrates that the user can select to create a subgraph from the ‘. . . ’ menu on the top navigation bar breadcrumbs. In other implementations other buttons or GUI functionality may be provided for the user to create a subgraph. In response, in user interface portion 861, which can comprise an overlaid GUI portion, or a separate GUI portion, the user can fill in details of the subgraph just like a regular graph, can name the subgraph, and can then link the subgraph back to the parent graph. User interface portion 862 (of FIG. 8J) illustrates that, back in the parent graph, the user can link the created subgraph to the parent graph (e.g., the user can add a ‘Region’ to link to the subgraph on click). The user can also add any number of ‘Text’ or ‘Note’ items to achieve the desired view. In various implementations, the user may also link the subgraph to related objects of the parent graph.”)
Floren does not explicitly teach
alternatives of each other within the situation
and simulating, using an artificial intelligence (AI) model, the situation a number of times by having the AI model choose paths through the logical graph to generate synthetic data that is representative of the real data, such that the synthetic data can be used for training other AI models.
However, Dechene teaches that
alternatives of each other within the situation ([0162] of Dechene states “In several embodiments, the reinforcement-learning model can include a hierarchical reinforcement learning model, as shown in FIGS. 5 and 9, and described above. In several embodiments, multiple alternative versions of the routing agent model can be trained on traffic generated from different traffic profiles.” [0155] of Dechene states “A model can be set as the primary, with additional models set as alternates. Alternate models can allow the system to quickly rollback in the event of model failure, while also maintaining different models to be quickly applied during operational scenarios” Floren teaches subgraphs in a graph based GUI, including creating subgraphs and linking subgraphs to a parent graph. Dechene teaches multiple alternative versions of an AI routing agent model trained under different traffic profiles and further teaches alternate models for different operational scenarios. Accordingly, it would have been obvious to represent at least some of Floren’s subgraphs as alternative subgraphs (e.g., alternative scenario branches) for the same simulated situation. )
and simulating, using an artificial intelligence (AI) model, the situation a number of times by having the AI model choose paths through the logical graph ([0056] of Dechene states “In many embodiments, training system 320 can generate an AI agent model that can be published to network control system 315 to make routing decisions. Training system 320 can include a reinforcement learning service 321, a digital twin service 322, a network traffic service 323, a policy service 324, a training service 325, and/or a traffic classification service 326. In many embodiments, training system 320 can be run by a reinforcement learning (RL) service, such as a Deep-Q Meta-Reinforcement Learning service, which can seek to train the AI agent.” [0168] of Dechene states “In a number of embodiments, method 2000 further can include an activity 2040 of training the routing agent model on the digital twin network simulation using the reinforcement-learning model on traffic that flows through nodes of the digital twin network simulation. The routing agent model can be similar or identical to AI agent routing service 316 (FIG. 3), agent 430 (FIGS. 4 and 7), agent 530 (FIG. 5), agent model 1251 (FIG. 12), and AI agent 1264 (FIG. 12)… In some embodiments, the routing agent model can include a machine-learning model, such as a neural network, a random forest model, a gradient boosted model, and/or another suitable model. In a number of embodiments, the reinforcement-learning model can include a deep-Q meta-reinforcement learning model.” And [0178] of Dechene states “In some embodiments, the user interface further can include second interactive elements configured to define a network topology. In a number of embodiments, the definitions of the network topology can include one or more of discovering and importing an existing network topology, creating a new network topology, or modifying an existing network topology.
In many embodiments, the routing agent model can be trained using the reinforcement learning model based on traffic that flows through nodes of the network topology.” And [0084] of Dechene states “Synthetic traffic can be generated within the digital twin model for direct training usage in RL agent actions and rewards, or as noise that serves as competing traffic against the legitimate training traffic.” [0201] of Dechene states “In several embodiments, method 2200 further can include, after block 2250, an activity 2255 of determining an entire path of nodes of the physical computer network for the flow.”)
Gutierrez teaches that
to generate synthetic data that is representative of the real data, such that the synthetic data can be used for training other AI models. (Column 8 Lines 49 – 62 of Gutierrez states “A generative model, as used herein, is used to describe models that generate instances of output variables that may be used for machine learning. A generative model may generate synthetic data that may be input into various machine learning models. A generative model may be referred to as a representation of a data distribution that may be used to generate data points. In some situations, a good generative model may be treated as a source of synthetic data—e.g., data that is realistic but not actual, real-world data. Multiple approaches exist for generating synthetic data including, but not limited to, generative adversarial networks, variational auto encoders, probabilistic graphical models, and agent-based models.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to combine the teachings of Floren with those of Gutierrez and Dechene. Floren teaches an interactive node/edge graph GUI with subgraphs for representing and modifying a simulated technical system. Gutierrez teaches simulation state generation and generation of synthetic data from simulation states. Dechene teaches use of a reinforcement learning model in a simulated environment to make decisions and train a model, including dashboard functionality for simulation results. One of ordinary skill in the art would have been motivated to incorporate the teachings of Gutierrez and Dechene into Floren to improve Floren’s graph-based simulation interface with explicit simulation state semantics, configurable statistical relationships, reinforcement-learning-driven path selection, user-adjustable factors and reward behavior, and simulation review functionality through a dashboard. These references are directed to compatible computer-implemented simulation workflows, and the combination would have predictably improved the ability to define, execute, and iteratively refine simulations for AI-driven applications.
Regarding Claim 3, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Floren, Gutierrez and Dechene teaches
guiding the user to generate the logical graph includes receiving positive or negative rewards associated with steps of the logical graph, wherein: the positive or negative rewards define why agents of the AI model would choose different paths of the logical graph; (Paragraph [0036] of Dechene states “The user interface can include one or more first interactive elements… The inputs include one or more modifications of at least a portion of the one or more first interactive elements of the user interface to update the policy settings of the reinforcement learning model. The method additionally can include training a neural network model using a reinforcement learning model with the policy settings as updated by the user to adjust rewards assigned in the reinforcement learning model.” And paragraph [0062] of Dechene states “An AI agent, such as an agent 430, takes a series of actions (e.g., an action 421) within an episode (e.g., 410), known as steps (e.g., a step 420). Each action (e.g., 421) can be informed by observations of a state 432 of an environment 440 (e.g., a training or live environment of a computer network) and an expected reward (e.g., a reward 423).”)
and simulating the situation includes training the AI model by having the agents of the AI model choose the different paths based on the positive or negative rewards. (Paragraph [0034] of Dechene states “The method also can include training a routing agent model on the digital twin network simulation using a reinforcement-learning model on traffic that flows through nodes of the digital twin network simulation. The routing agent model includes a machine-learning model.”)
Regarding Claim 7, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Floren, Gutierrez and Dechene teaches
training another AI model with the synthetic data such that no real-world data is used to train the another AI model. (Column 37, Lines 60 – 65 of Gutierrez states “The synthetic dataset may be used in various ways including, for instance, training another machine learning model, modeling a database, or comparing the synthetic dataset with other datasets to possibly determine whether the other datasets represent actual data or synthetic data.” Column 8 Lines 49 – 62 of Gutierrez states “A generative model, as used herein, is used to describe models that generate instances of output variables that may be used for machine learning. A generative model may generate synthetic data that may be input into various machine learning models. A generative model may be referred to as a representation of a data distribution that may be used to generate data points. In some situations, a good generative model may be treated as a source of synthetic data—e.g., data that is realistic but not actual, real-world data. Multiple approaches exist for generating synthetic data including, but not limited to, generative adversarial networks, variational auto encoders, probabilistic graphical models, and agent-based models.”)
Claims 10 – 12, 16 recite substantially similar subject matter as claims 1 – 3 and 7 respectively, and are rejected with the same rationale, mutatis mutandis.
Claims 19 – 20 recite substantially similar subject matter as claims 1 – 2 respectively, and are rejected with the same rationale, mutatis mutandis.
Regarding Claim 21, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Floren, Gutierrez and Dechene teaches
wherein the AI model is a reinforcement learning model, and wherein the AI model chooses paths during the simulating based on factors created by the user within the GUI, wherein the factors created by the user are categorical factors or continuous factors. ([0034] of Dechene states “The method also can include training a routing agent model on the digital twin network simulation using a reinforcement-learning model on traffic that flows through nodes of the digital twin network simulation. The routing agent model includes a machine-learning model. “ [0036] of Dechene states “The method also can include receiving one or more inputs from the user. The inputs include one or more modifications of at least a portion of the one or more first interactive elements of the user interface to update the policy settings of the reinforcement learning model. The method additionally can include training a neural network model using a reinforcement learning model with the policy settings as updated by the user to adjust rewards assigned in the reinforcement learning model.” [0056] of Dechene states “In many embodiments, training system 320 can generate an AI agent model that can be published to network control system 315 to make routing decisions. Training system 320 can include a reinforcement learning service 321, a digital twin service 322, a network traffic service 323, a policy service 324, a training service 325, and/or a traffic classification service 326. In many embodiments, training system 320 can be run by a reinforcement learning (RL) service, such as a Deep-Q Meta-Reinforcement Learning service, which can seek to train the AI agent. 
The RL training environment can be based on a simulated digital-twin network topology provided by digital twin service 322, and can augmented with synthetic network traffic provided by network traffic service 323.” [0151] of Dechene states “In several embodiments, the user can specify the training scenario through interactive buttons, sliders, and editable text fields in training scenario component 1730. The user can customize policy tradeoffs and optimize data flow through the network, effectively tuning the RL model and its hyperparameters in accordance with the user's subject matter expertise and intent. Network speed and reliability, priority data type, and expected seasonal traffic variation are examples of the type of dimensions the user can create and modify. Several common training scenarios can be preloaded for users, with support for full customization.” Dechene uses categorical and continuous factors from the user (priority data type, speed, reliability, etc.), which affect the training scenarios of RL models when making routing decisions.)
Regarding Claim 22, the rejection of claim 21 is incorporated herein. Furthermore, the combination of Floren, Gutierrez and Dechene teaches
setting a statistical relationship between the factors created by the user within the GUI; (Column 45 Lines 39 – 41 of Gutierrez states “The correlation parameter may comprise one of covariance, interclass correlation, intraclass correlation, or rank.” Column 46 Lines 27 – 33, 39 – 43 of Gutierrez states “generate, based on the data model, a user interface; receive user interactions with the user interface, the user interactions defining relationships between the fields of the data model; generate, based on the relationships, a generative model, wherein the generative model may be configured to generate generated datasets having records arranged in the fields;… determine, based on data in the one or more fields of the generated test dataset, a parameter, wherein the parameter may be one or more of a statistical parameter or a correlation parameter;” [0151] of Dechene states “In several embodiments, the user can specify the training scenario through interactive buttons, sliders, and editable text fields in training scenario component 1730. The user can customize policy tradeoffs and optimize data flow through the network, effectively tuning the RL model and its hyperparameters in accordance with the user's subject matter expertise and intent. Network speed and reliability, priority data type, and expected seasonal traffic variation are examples of the type of dimensions the user can create and modify. Several common training scenarios can be preloaded for users, with support for full customization.” Gutierrez provides the statistical and correlation relationships among the user-defined fields/factors. Dechene further provides the user-created GUI factors.)
creating one or more simulation states; (Column 20 Lines 63 – 68 of Gutierrez states “The simulation specification may be used, with instantiation data, to instantiate instances of agents who are defined in the simulation specification by sampling the simulation specification with a random number generator, resulting in a simulation state. That simulation state may be iteratively sampled, using the random number generator, to perform actions defined in behaviors associated with the instantiated agents. Each sampling of the simulation state may be as a simulation step.”)
and creating one or more simulation flow graphs using a node-graph canvas based on the one or more simulation states created, and wherein each node in the one or more simulation flow graphs represents a simulation step and each edge establishes a way-point between two states. ([0167] of Floren states “Referring to FIG. 8A, an example user interface 800 includes an interactive graph section 802 in which various systems, subsystems, and data objects can be represented by nodes or indicators, such as icons 804 and 806. For ease of description, the information shown in the GUIs of the present disclosure is generally referred to as objects, but as noted various systems and subsystems may similarly be represented. As described throughout the present disclosure, the systems, subsystems, and objects may represent various things, such as people, locations, facilities, and the like. Relationships among the various systems, subsystems, and objects are represented by edges, such as edge 808, which may optionally be directional (or bi-directional) to indicate, e.g., flows of information or items.” [0183] of Floren states “The user interface portions 860-862 illustrate system functionality related to subgraphs. Subgraphs provide another way to abstract away parts of a larger, more complicated graph. User interface portion 860 illustrates that the user can select to create a subgraph from the ‘. . . ’ menu on the top navigation bar breadcrumbs. In other implementations other buttons or GUI functionality may be provided for the user to create a subgraph. In response, in user interface portion 861, which can comprise an overlaid GUI portion, or a separate GUI portion, the user can fill in details of the subgraph just like a regular graph, can name the subgraph, and can then link the subgraph back to the parent graph.” Column 23 Lines 21 – 31 of Gutierrez states “In step 1105, time is set equal to zero (t=0) for the generation of the simulation state. 
In step 1106, the simulation specification 1100 is sampled to generate the simulation state. As no previous step of the simulation exists, the simulation state is generated based on the probability distribution definitions and other data of the simulation specification 1100. In step 1107, the simulation state of the instantiated agents is stored. If desired, synthetic data may be generated from the simulation state of the instantiated agents (simulation step t=0) and stored in step 1109.” Column 26 Lines 49 – 63 of Gutierrez states “The generating the synthetic dataset simulating may further comprise iteratively simulating additional simulation steps of the agent. The generating the synthetic dataset may be based on the additional simulation steps. The generated synthetic dataset may comprise synthetic data, of the agent instance, from two or more iterative simulation steps. The outputting may comprise streaming, per simulation step, the synthetic dataset. Additional instructions may be received to modify a quantity of the agent instances to be generated in the simulation state and the method may regenerate, based on the modified quantity of agent instances, the simulation state, and the regenerated simulation state may comprise a count of agent instances corresponding to the received modified quantity.” Floren teaches the node/edge graph representation and Gutierrez teaches simulation states and simulation step progression. It would have been obvious to use Floren’s nodes/edges to represent Gutierrez’s simulation steps and transitions between simulation states (i.e., a way-point between two states).)
Claims 4, 13, 26 are rejected under 35 U.S.C. 103 as being unpatentable over Floren et al. (U.S. Pub. 2022/0075515) in view of Dechene et al. (U.S. Pub. 2022/0245462) and Gutierrez et al. (U.S. Patent No. 11,847,390), and further in view of Mallya Kasaragod et al. (U.S. Patent No. 11,836,577).
Regarding claim 4, the rejection of claim 3 is incorporated herein. The combination of Floren, Gutierrez and Dechene teaches
the logical graph includes a plurality of nodes connected via edges, (Column 10, Lines 45-50 of Gutierrez states “Based on their graphical nature, users are able to modify specific nodes to adjust parameters of variables (e.g., parameters describing the content of individual cells in fields of a database) and to modify specific edges to adjust correlations between the variables (e.g., correlations describing relationships between fields of the database).”)
the combination of Floren, Gutierrez and Dechene does not teach
using a graphical processing unit to graphically simulate the situation the number of times using the AI model.
However, Mallya Kasaragod explicitly teaches
using a graphical processing unit to graphically simulate the situation the number of times using the AI model. (Column 7, Lines 64 – 67 of Mallya Kasaragod states “For instance, the set of simulation parameters may include the batch size for the simulation, which may be used to determine the GPU requirements for the simulation.” And Column 19, Lines 54 – 61 of Mallya Kasaragod states “In an embodiment, based on the simulation parameters and the system parameters, the simulation agent 304 executes one or more visualization applications 310 to allow the customer to interact and visualize the simulation as it is being performed. The one or more visualization applications 310 may generate a graphical representation of the simulation, which may include a graphical representation of the simulation environment.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Mallya Kasaragod with the combination of Floren, Gutierrez, and Dechene. Floren teaches an interactive node/edge graph GUI with subgraphs for representing and modifying a simulated technical system. Gutierrez teaches simulation state generation and the generation of synthetic data from simulation states. Dechene teaches the use of a reinforcement learning model in a simulated environment to make decisions and train a model, including dashboard functionality for simulation results. Mallya Kasaragod teaches determining GPU requirements for a simulation and generating a graphical representation of the simulation environment. Using a GPU to execute and visualize simulations is a well-known approach to efficiently compute simulations and provide interactive graphical feedback. One of ordinary skill in the art would have been motivated to incorporate the teachings of Mallya Kasaragod into the combination of Floren, Gutierrez, and Dechene because doing so yields predictable improvements in processing speed and visualization quality and allows the AI model to simulate situations multiple times more efficiently. Therefore, the combination of Floren, Gutierrez, Dechene, and Mallya Kasaragod would have been obvious to a POSITA.
Claim 13 recites substantially similar subject matter as claim 4, and is rejected with the same rationale, mutatis mutandis.
Regarding claim 26, the rejection of claim 21 is incorporated herein. The combination of Floren, Gutierrez, Dechene, and Mallya teaches
retraining, by a controller, the reinforcement learning model based on edits to rewards or simulation flows by the user. ([0177] of Dechene states “In a number of embodiments, method 2100 additionally can include an activity 2120 of training a neural network model using a reinforcement learning model with the policy settings as updated by the user to adjust rewards assigned in the reinforcement learning model. The neural network model can be similar or identical to neural network model 431 (FIG. 4) and/or neural network models 531 (FIG. 5).” [0035] of Dechene states “The acts also can include training a routing agent model on the digital twin network simulation using a reinforcement-learning model on traffic that flows through nodes of the digital twin network simulation. The routing agent model includes a machine-learning model. The acts additionally can include deploying the routing agent model, as trained, from the digital twin network simulation to the SDN control system of the physical computer network.” Column 32 Lines 37 – 44 of Mallya states “Accordingly, FIG. 14 shows an illustrative example of a process 1400 for updating a reinforcement training model based on simulation data from a simulation application container in accordance with at least one embodiment. The process 1400 may be performed by the aforementioned training application container, which may execute a training application for training a reinforcement learning model.” Column 10 Lines 8 – 19 of Mallya states “The training of the reinforcement learning model may further take into account the reward value, as determined via the custom-designed reinforcement function, corresponding to the action performed, the initial state, and the state attained via execution of the action. 
The training application container may provide the updated reinforcement learning model to a simulation application container to utilize in the simulation of the application and to obtain new state-action-reward data that may be used to continue updating the reinforcement learning model.”)
Claims 2, 8, 11, 17, 23 – 25 are rejected under 35 U.S.C. 103 as being unpatentable over Floren et al. (U.S. Pub. 2022/0075515) in view of Dechene et al. (U.S. Pub. 2022/0245462) and Gutierrez et al. (U.S. Patent No. 11,847,390), and further in view of Kumar et al. (U.S. Patent No. 10,528,327).
Regarding Claim 2, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Floren, Gutierrez and Dechene teaches the logical graph is a simplified version of the situation, the method further comprising: … via analyzing the simulations of the AI model ([0183] of Floren states “Referring to FIGS. 8I-8J, example user interface portions 860-862 are shown which may comprise portions of, or updates to, user interface 800. The user interface portions 860-862 illustrate system functionality related to subgraphs. Subgraphs provide another way to abstract away parts of a larger, more complicated graph. User interface portion 860 illustrates that the user can select to create a subgraph from the ‘. . . ’ menu on the top navigation bar breadcrumbs. In other implementations other buttons or GUI functionality may be provided for the user to create a subgraph. In response, in user interface portion 861, which can comprise an overlaid GUI portion, or a separate GUI portion, the user can fill in details of the subgraph just like a regular graph, can name the subgraph, and can then link the subgraph back to the parent graph.” [0168] of Floren states “The example user interface 800 provides a view of a simulated technical system representing a real-world system. The view may include various technical systems, subsystems, objects, and the like. Although not shown in the user interface, the system can associate various data, including time-based data, and models with the systems, subsystems, and objects, such that simulations can be run. The user interface, based on simulations, can provide a view of how the real-world system has performed in the past, is performing, and can be expected to perform in the future” [0187] of Floren states “Referring to FIGS. 8N-8O, example user interface portions 880, 895 are shown which comprise portions of, or updates to, user interface 800. 
The user interface portions 880, 895 illustrate additional system functionality related to running simulations. As shown, a simulation panel 881 can display simulation parameters and results. Via the simulation panel 881, the user can specify inputs, outputs, and models (e.g., models 885 and 886). Further, the user can run multiple simulations, as represented by column 889 and additional columns that may be added.”)
However, the combination of Floren, Gutierrez and Dechene does not explicitly teach
adding, …, additional nuanced steps within the logical graph.
Kumar explicitly teaches
adding, …, additional nuanced steps within the logical graph. (Column 7 Lines 45 – 49 of Kumar states “The developer may then proceed with further configuring the contents of the workflow, adding workflow steps, modifying workflow steps, removing workflow steps, or the like.” Column 8 Lines 1 – 10 of Kumar states “In one example, step selector 306 may enable a developer to select a step that is associated with a local application, such as Microsoft® Outlook®, or a network-based application, such as Facebook®. Step selector 306 enables the steps to be chained together in a sequence, optionally with conditional steps, for inclusion in workflow logic 120. In step 206, each of the selected steps in the workflow is enabled to be configured. In an embodiment, step configuration UI generator 308 enables configuration of each workflow step in a workflow.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kumar with the combination of Floren, Gutierrez, and Dechene. Floren teaches an interactive node/edge graph GUI with subgraphs for representing and modifying a simulated technical system. Gutierrez teaches simulation state generation and the generation of synthetic data from simulation states. Dechene teaches the use of a reinforcement learning model in a simulated environment to make decisions and train a model, including dashboard functionality for simulation results. Kumar provides a step-selection mechanism for the combined system's guidance of a user in creating a logical representation of a process or graph, teaching a “step selector” that displays a menu or list of available steps for inclusion in a workflow. One of ordinary skill in the art would have been motivated to incorporate the teachings of Kumar into the combination of Floren, Gutierrez, and Dechene in order to improve usability, streamline the step-selection process, and reduce user error when constructing the logical graph. Therefore, the combination of Floren, Gutierrez, Dechene, and Kumar would have resulted in a predictable enhancement of the system.
Regarding claim 8, the rejection of claim 1 is incorporated herein. Furthermore, the combination of Floren, Gutierrez, Dechene, and Kumar teaches
guiding the user to generate the logical graph includes recommending one or more steps for the user to add into the logical graph. (Column 7, Lines 58 – 68 of Kumar states “When a developer is editing a workflow, step selector 306 may enable the developer to select workflow steps for inclusion in the workflow, and to order the steps. The workflow steps may be accessed by step selector 306 in workflow library 118. For instance, step selector 306 may display a menu of workflow steps, a scrollable and/or searchable list of available workflow steps, or may provide the workflow steps in another manner, and may enable the developer to select any number of workflow steps from the list for inclusion in the workflow.”)
Claims 11 and 17 recite substantially similar subject matter as claims 2 and 8, respectively, and are rejected with the same rationale, mutatis mutandis.
Regarding Claim 23, the rejection of claim 22 is incorporated herein. Furthermore, the combination of Floren, Gutierrez, Dechene, and Kumar teaches
detecting, by a controller, the user connecting two state nodes; ([0167] of Floren states “Referring to FIG. 8A, an example user interface 800 includes an interactive graph section 802 in which various systems, subsystems, and data objects can be represented by nodes or indicators, such as icons 804 and 806. For ease of description, the information shown in the GUIs of the present disclosure is generally referred to as objects, but as noted various systems and subsystems may similarly be represented. As described throughout the present disclosure, the systems, subsystems, and objects may represent various things, such as people, locations, facilities, and the like. Relationships among the various systems, subsystems, and objects are represented by edges, such as edge 808, which may optionally be directional (or bi-directional) to indicate, e.g., flows of information or items.” [0183] of Floren states “The user interface portions 860-862 illustrate system functionality related to subgraphs. Subgraphs provide another way to abstract away parts of a larger, more complicated graph. User interface portion 860 illustrates that the user can select to create a subgraph from the ‘. . . ’ menu on the top navigation bar breadcrumbs. In other implementations other buttons or GUI functionality may be provided for the user to create a subgraph. In response, in user interface portion 861, which can comprise an overlaid GUI portion, or a separate GUI portion, the user can fill in details of the subgraph just like a regular graph, can name the subgraph, and can then link the subgraph back to the parent graph.” Column 23 Lines 21 – 31 of Gutierrez states “In step 1105, time is set equal to zero (t=0) for the generation of the simulation state. In step 1106, the simulation specification 1100 is sampled to generate the simulation state. 
As no previous step of the simulation exists, the simulation state is generated based on the probability distribution definitions and other data of the simulation specification 1100. In step 1107, the simulation state of the instantiated agents is stored. If desired, synthetic data may be generated from the simulation state of the instantiated agents (simulation step t=0) and stored in step 1109.” Floren teaches a user-editable node/edge graph interface and explicitly teaches linking graph/subgraph elements. Detecting a user connection between nodes is inherent in implementing this graph-linking interface functionality.)
and prompting, by the controller, the user within the GUI to add rewards to reinforce a particular outcome for a simulation using logical operators, ([0174] of Dechene states “Referring to FIG. 21, method 2100 can include an activity 2110 of transmitting a user interface to be displayed to a user. The user interface can be provided by GUI service 311 of user interface system 310 (FIG. 3), and exemplary displayed of the user interface can be similar or identical to user interface displays 1500 (FIG. 15), 1600 (FIG. 16), 1700 (FIG. 17), 1800 (FIG. 18), and/or 1900 (FIG. 19). In some embodiments, the user interface can include one or more first interactive elements that display policy settings of a reinforcement learning model. For example, the policy settings can be similar or identical to policies 1220 (FIG. 12), and/or the first interactive elements can be similar or identical to one or more of the elements of training scenarios component 1730 (FIG. 17) and/or one or more of the elements of user interface display 1800 (FIG. 18). The reinforcement learning model can be similar or identical to RL model 400 (FIG. 4), HRL model 500 (FIG. 5), Meta-RL model 700 (FIG. 7), and/or RL model 1241 (FIG. 12). In a number of embodiments, the one or more first interactive elements can be configured to allow the user to update the policy settings of the reinforcement learning model.” [0177] of Dechene states “In a number of embodiments, method 2100 additionally can include an activity 2120 of training a neural network model using a reinforcement learning model with the policy settings as updated by the user to adjust rewards assigned in the reinforcement learning model. The neural network model can be similar or identical to neural network model 431 (FIG. 4) and/or neural network models 531 (FIG. 5).
In many embodiments, the neural network model can include a routing agent model configured to control a physical computer network through a software-defined-network (SDN) control system.” Column 10 Lines 16 – 24 of Kumar states “The condition of workflow step 702 enables the workflow to fork based on the determination of a condition (e.g., a variable value). The condition may include an object name, a relationship (e.g., a logical relationship, such as equal to, includes, not equal to, less than, greater than, etc.), and a value, which are all defined by the developer interacting with workflow step 702. Corresponding action steps may be performed depending on which way the workflow forks based on the condition.” Dechene teaches the RL reward-adjustment context and user-updated policy settings in a GUI. Kumar teaches explicit GUI-defined logical relationships/operators used to control workflow outcomes.)
wherein the user is provided different quantities of the rewards to the way-point between the two states. ([0063] of Dechene states “The reward (e.g., 423) can assign reward scores for various actions, such as a score of 1 for using MPLS, in which there is guaranteed success, a score of 3 for using the public internet with enough bandwidth, a score of −2 for using the public internet with limited bandwidth, and a score of −5 for an error in the network.” [0167] of Floren states “Referring to FIG. 8A, an example user interface 800 includes an interactive graph section 802 in which various systems, subsystems, and data objects can be represented by nodes or indicators, such as icons 804 and 806. For ease of description, the information shown in the GUIs of the present disclosure is generally referred to as objects, but as noted various systems and subsystems may similarly be represented. As described throughout the present disclosure, the systems, subsystems, and objects may represent various things, such as people, locations, facilities, and the like. Relationships among the various systems, subsystems, and objects are represented by edges, such as edge 808, which may optionally be directional (or bi-directional) to indicate, e.g., flows of information or items.” [0183] of Floren states “The user interface portions 860-862 illustrate system functionality related to subgraphs. Subgraphs provide another way to abstract away parts of a larger, more complicated graph. User interface portion 860 illustrates that the user can select to create a subgraph from the ‘. . . ’ menu on the top navigation bar breadcrumbs. In other implementations other buttons or GUI functionality may be provided for the user to create a subgraph. 
In response, in user interface portion 861, which can comprise an overlaid GUI portion, or a separate GUI portion, the user can fill in details of the subgraph just like a regular graph, can name the subgraph, and can then link the subgraph back to the parent graph.” Column 23 Lines 21 – 31 of Gutierrez states “In step 1105, time is set equal to zero (t=0) for the generation of the simulation state. In step 1106, the simulation specification 1100 is sampled to generate the simulation state. As no previous step of the simulation exists, the simulation state is generated based on the probability distribution definitions and other data of the simulation specification 1100. In step 1107, the simulation state of the instantiated agents is stored. If desired, synthetic data may be generated from the simulation state of the instantiated agents (simulation step t=0) and stored in step 1109.” Dechene teaches different reward quantities in an RL context. Combined with Gutierrez and Floren, this teaches assigning different rewards to graph transitions (way-points) between state nodes.)
Regarding Claim 24, the rejection of claim 23 is incorporated herein. Furthermore, the combination of Floren, Gutierrez, Dechene, and Kumar teaches
providing a review dashboard to the user within the GUI, wherein the review dashboard includes sectional insights to different aspects of a configuration of the simulation. ([0157] of Dechene states “In many embodiments, the user can select the current state monitoring option in menu 1910 to monitor the state and/or performance of an AI model once it is deployed on the live network. When the model is deployed, the user can have visibility into the live network through an interactive dashboard, such as dashboard 1940, which can assist in tracking performance against relevant benchmarks, as well as alerting the user to any performance issues or security threats. The dashboard can include metrics and/or visualizations describing the network's health. In some embodiments, a dashboard menu 1941 can allow the user to select various different dashboard display options, such as data, charts, and/or alerts.” [0041] of Floren states “In response, graphical user interfaces (“GUIs”) may be generated that can include, for example, graph-based GUIs, map-based GUIs, and panel-based GUIs, among others. The GUIs may include one or more panels to display data including technical data objects (also referred to herein as “objects”) (e.g., pumps, compressors, valves, machinery, welding stations, vats, containers, products or items, organizations, countries, counties, factories, customers, hospitals, etc.), technical object properties (e.g., flow rate, suction temperature, volume, capacity, order volume, sales amounts, sales quantity during a time period (e.g., a day, a week, a year, etc.), population density, patient volume, etc.), simulations, alerts, recommendations, and the like. The technical objects and technical object properties may represent the inputs and outputs of the simulated models. Various GUIs may further comprise at least one of information, trend, simulation, mapping, schematic, time, equipment, and toolbar panels. 
Various panels may display the objects, object properties, inputs, and outputs of the simulated models.”)
Regarding Claim 25, the rejection of claim 24 is incorporated herein. Furthermore, the combination of Floren, Gutierrez, Dechene, and Kumar teaches
wherein the sectional insights include at least factors, statistical relationships, time series, the one or more simulation states, and the rewards for the logical graph, and wherein the GUI further enables the user to combine the simulation with cumulative metrics of previously run simulations. ([0151] of Dechene states “In several embodiments, the user can specify the training scenario through interactive buttons, sliders, and editable text fields in training scenario component 1730. The user can customize policy tradeoffs and optimize data flow through the network, effectively tuning the RL model and its hyperparameters in accordance with the user's subject matter expertise and intent. Network speed and reliability, priority data type, and expected seasonal traffic variation are examples of the type of dimensions the user can create and modify. Several common training scenarios can be preloaded for users, with support for full customization.” [0157] of Dechene states “When the model is deployed, the user can have visibility into the live network through an interactive dashboard, such as dashboard 1940, which can assist in tracking performance against relevant benchmarks, as well as alerting the user to any performance issues or security threats. The dashboard can include metrics and/or visualizations describing the network's health. In some embodiments, a dashboard menu 1941 can allow the user to select various different dashboard display options, such as data, charts, and/or alerts.” [0083] of Floren states “Additionally the interactive user interface may be configured to allow a user to view or edit, in at least one of the displayed panels, unusual (e.g., abnormal) or periodic events that have occurred and/or that may occur in the future during operation of the logical computations, sensors, and/or measuring devices 114 and/or real-world subsystems 112. Such events may also apply to the simulated virtual objects (e.g., virtual items or products, virtual measuring devices, and/or virtual subsystems).” [0178] of Floren states “As shown, the example user interface portion 840 can include a simulation panel 842 that can display simulation parameters and results. Via the simulation panel 842, the user can specify inputs, outputs, and models. Further, the user can run multiple simulations, as represented by columns 843 and 844. As described above, the user can specify which simulations are used to display values in the readouts 833 and 834. Additional details of GUI functionality related to simulations are provided herein, including in reference FIGS. 8N-8O and 9C-9F.”)
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BYUNGKWON HAN whose telephone number is (571)272-5294. The examiner can normally be reached M-F, 9:00 AM - 6:00 PM PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B. Zhen, can be reached at (571)272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BYUNGKWON HAN/ Examiner, Art Unit 2121
/Li B. Zhen/ Supervisory Patent Examiner, Art Unit 2121