Prosecution Insights
Last updated: April 19, 2026
Application No. 18/203,732

GENERATING EXTREME BUT PLAUSIBLE SYSTEM RESPONSE SCENARIOS USING GENERATIVE NEURAL NETWORKS

Non-Final OA: §103, §112
Filed: May 31, 2023
Examiner: ALABI, OLUWATOSIN O
Art Unit: 2129
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Nasdaq Technology AB
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 8m
Grant Probability With Interview: 85%

Examiner Intelligence

Grants 58% of resolved cases.

Career Allow Rate: 58% (116 granted / 199 resolved; +3.3% vs TC avg)
Interview Lift: +26.3% (strong; allowance rate among resolved cases with vs. without an interview)
Avg Prosecution: 3y 8m (typical timeline; 45 applications currently pending)
Total Applications: 244 (career history, across all art units)

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§103: 40.0% (+0.0% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 23.2% (-16.8% vs TC avg)

Deltas are measured against a Tech Center average estimate; based on career data from 199 resolved cases.
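The headline figures above are internally consistent; a quick check of the career allow rate and the vs-TC delta (counts taken from the dashboard; reading the delta as examiner rate minus Tech Center average is an assumption):

```python
# Figures quoted from the Examiner Intelligence section above.
granted, resolved = 116, 199

career_allow_rate = granted / resolved
# 116 / 199 ≈ 0.583, which the dashboard rounds to 58%.

# "+3.3% vs TC avg" is read here as (examiner rate - TC average),
# so the implied Tech Center average below is an inference, not a
# number reported by the dashboard.
implied_tc_avg = career_allow_rate - 0.033
```

The split of the 199 resolved cases into with-interview and without-interview groups is not given, so the absolute rates behind the +26.3% interview lift cannot be recovered from these figures alone.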

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims the benefit of prior-filed U.S. provisional patent application number 63/406,282, filed on September 14, 2022, which is acknowledged.

Drawings

The drawings were received on 05/31/2023. These drawings are acceptable.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the term “extreme but plausible” as claimed, in the limitation “generating extreme but plausible scenarios”, is considered a relative term which renders the claim indefinite. Furthermore, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Regarding the claims that depend from claim 1, claims 2-10, the claims fail to resolve the noted deficiency and in some cases also contain the noted problematic term. These claims are thus rejected under the same rationale.
Regarding claim 9, the term “equilibrium” as claimed in the limitation “wherein the convergence value corresponds to an equilibrium being reached between the first neural network and the second neural network” is considered a relative term which renders the claim indefinite. Furthermore, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Regarding claims 11 and 17, the limitations are similar to claim 1 and are thus rejected under the same rationale. Regarding the claims that depend from claim 11, claims 12-16, the claims fail to resolve the noted deficiency and in some cases also contain the noted problematic term. These claims are thus rejected under the same rationale. Regarding claim 13, the limitations are similar to claim 9 and are thus rejected under the same rationale.

Regarding the claims that depend from claim 17, claims 18-20, the claims fail to resolve the noted deficiency and in some cases also contain the noted problematic term. These claims are thus rejected under the same rationale. Regarding claim 19, the limitations are similar to claim 9 and are thus rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 10-12, 14-15 are rejected under 35 U.S.C.
103 as being unpatentable over Kumar et al. (US 20230359193, hereinafter ‘Kumar’) in view of Jain et al. (US 20230334299, hereinafter ‘Jain’). Regarding independent claim 1, Kumar teaches an apparatus including a generative neural network for generating extreme but plausible scenarios for a system with multiple data categories, the apparatus comprising: one or more hardware processors; one or more memories in communication with the one or more hardware processors; wherein: the one or more hardware processors and the one or more memories are configured to: (in [0057] FIG. 2B with reference to FIG. 2A, illustrates an exemplary representation of modelling system 106/user equipment 102 for facilitating prediction of failures associated with gas extraction systems, in accordance with an embodiment of the present disclosure. In an aspect, the system (106)/user equipment 102 may comprise one or more processor(s) 202. The one or more processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, baseband digital processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) 202 may be configured to fetch and execute computer-readable instructions stored in a memory 204 of the system 106… [0060] The processing engine 208 may include one or more engines selected from any of a data acquisition engine 212, a feature generation engine 214, Generative Adaptive Network (GAN) engine 216, prediction engine 218 and other engines (220). In an embodiment, the data acquisition engine 212 may enable acquire a set of data packets from one or more sensors 110...) 
determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data; (in as depicted in Fig.5 and [0071] The above system takes sensor input 502-1, 502-2 and 502-3 as raw data for data acquisition 504 […for the multiple data categories for repeated time intervals over a time period to produce training data], cleaning 506 and labelling 510 and feed the feature generation 508 data to the GAN optimizer engine 512 processed in parallel to provide the output for the optimized failure prediction solution… [0072] The above system has broadly following steps— [0073] Feature extraction from time series of different sensors [0074] Data labelling process [0075] Training of GAN Models [0076] Inference pipeline… [0085] In an embodiment, for each gas well, workover start date (ws.sup.start) and workover end date (ws.sup.end) are available as a csv file which are used to mark each observation computed in feature extraction process. All observations between workover start date and workover end date are marked as failure condition data. Also, a window (W days) of observations before workover start date are also marked as failure in order to enable ahead failure prediction. All other observations outside of the window (ws.sup.start−W, ws.sup.end) are marked as observations belonging to good condition data. After labelling process we have data in the format {x.sup.t, y.sup.t}, where y.sup.t takes value either good or failure [determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data]. determine one or more training data sets based on the training data; (in [0061] In an embodiment, the GAN engine 216 may include machine learning techniques where given a training set, this technique learns to generate new data with the same statistics as the training set [determine one or more training data sets based on the training data]. 
For example, a GAN trained on data can generate new events that look at least superficially authentic to human observers, having many realistic characteristics. Though originally proposed as a form of generative model for supervised learning, GANs have also proven useful for semi-supervised learning, unsupervised learning, and reinforcement learning. In an exemplary embodiment, the GAN engine can be configured to analyse each set of data packets received from the sensors [determine one or more training data sets based on the training data]. [0062] In an embodiment, the prediction engine (218) may include machine learning methodologies using Gaussian process. The Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution, i.e., every finite linear combination of them is normally distributed…) (i) generate noise data associated with multiple random variables; (in [0062] In an embodiment, the prediction engine (218) may include machine learning methodologies using Gaussian process. The Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution [generate noise data associated with multiple random variable], i.e., every finite linear combination of them is normally distributed…; And in [0088] During the training process as depicted below in FIG. 6A, GAN model is used for modelling variability of good working condition of gas wells. A GAN consists of adversarial engines, a generator G and a discriminator D. The generator G learns a distribution p.sub.g over data x via a mapping G(z) of samples z, 1D vectors of uniformly distributed input noise sampled from latent space [generate noise data associated with multiple random variables], to feature space. 
In this setting, the network architecture is a standard neural network decoder. Let length of the vector z be L. Here, Different values of L.sub.z will be explored and chose the one which results in best performance of the model. ) (ii) process the noise data, by a first neural network of a Generative Adversarial Network (GAN), to produce generated input data; (in [0088] During the training process as depicted below in FIG. 6A, GAN model is used for modelling variability of good working condition of gas wells. A GAN consists of adversarial engines, a generator G and a discriminator D. The generator G learns a distribution p.sub.g over data x via a mapping G(z) of samples z, 1D vectors of uniformly distributed input noise sampled from latent space, to feature space [process the noise data, by a first neural network of a Generative Adversarial Network (GAN), to produce generated input data]. In this setting, the network architecture is a standard neural network decoder. Let length of the vector z be L. Here, Different values of L.sub.z will be explored and chose the one which results in best performance of the model. [0089] Discriminator D is a neural network that maps a derived feature vector to single scalar value D(.). The discriminator output D(.) can be interpreted as probability that the given input to the discriminator D was a feature vector from training data belonging to good working condition of the well or generated G(z) by the generator G […to produce generated input data]. D and G are simultaneously optimized through the below two player minimax game with value function V(G, D).) 
(iii) process, by a second neural network of the GAN, the generated input data and the one or more training data sets to produce a loss value; (iv) modify the first neural network and the second neural network based on the loss value; (in [0091] The Generator and Discriminator networks are trained using back propagation of gradients of loss function w.r.t different parameters in Generator G and discriminator D network [(iii) process, by a second neural network of the GAN, the generated input data and the one or more training data sets to produce a loss value]. Generator and Discriminator weights are updated iteratively in training engine. In each iteration generator and discriminator weights are updated [iv) modify the first neural network and the second neural network based on the loss value]. While updating weights of a generator discriminator weights are kept constant and while updating the discriminator weights generator weights are kept constant. Number of iterations for which generator and discriminator weight updates happen is referred to as Number of epochs (N.sub.epoch).) repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network; (in [0091] The Generator and Discriminator networks are trained using back propagation of gradients of loss function w.r.t different parameters in Generator G and discriminator D network [repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network]. Generator and Discriminator weights are updated iteratively in training engine. In each iteration generator and discriminator weights are updated. While updating weights of a generator discriminator weights are kept constant and while updating the discriminator weights generator weights are kept constant. 
Number of iterations for which generator and discriminator weight updates happen is referred to as Number of epochs (N.sub.epoch) [repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network].) generate evaluation change events for the multiple data categories using the trained first neural network, and based on the evaluation change events, produce generated change events; (As depicted in Fig8 and in [0112] FIG. 8 illustrates an exemplary representation system architecture of PCP fault prediction engine [generate evaluation change events for the multiple data categories using the trained first neural network, and based on the evaluation change events, produce generated change events], in accordance with an embodiment of the present disclosure.) filter the generated change events to identify extreme but plausible scenarios using a predetermined change measure with one or more predetermined thresholds; (in [0116] CBM GAN Model 816: The CBM GAN model may include of all the processes mentioned in the above sections. It may include a generator, discrimination, z-estimator and residual and discriminator loss calculations. [0117] CBM Prediction Module 818: The CBM prediction module 818 may include aggregator and threshold systems those predict the type of failure 820 and the chances of failure 832 [filter the generated change events to identify extreme but plausible scenarios using a predetermined change measure with one or more predetermined thresholds]. The type of failure may be determined based on a ranking of the probability of all the types of failures. The chances of failure may be a number between 0 and 1.0. A high value indicates that the chances of failure is high.) and provide information concerning the extreme but plausible scenarios to a user interface. 
(in [0071] The above system takes sensor input 502-1, 502-2 and 502-3 as raw data for data acquisition 504, cleaning 506 and labelling 510 and feed the feature generation 508 data to the GAN optimizer engine 512 processed in parallel to provide the output for the optimized failure prediction solution. The system comprises of scenario, dynamic data and metadata tables as input and generates optimized failure prediction solution (dashboards, plots and CSV files) as output for the stakeholders [and provide information concerning the extreme but plausible scenarios to a user interface] to analyze and take decisions. ; And in [0113] In an exemplary embodiment, a sample application solution of the above GMS Engine may involve the prediction of the failures in the Progressive Cavity Pump (PCP) used in Coal Bed Methane (CBM) wells for gas extraction... Following is a subset of the sensors (IoT UEs) used in measuring the parameters during the operation of the PCP: current sensor 804, torque sensor 802, tubing pressure sensor 806, annular flow rate sensor 808, rpm sensor 810, gas flow rate sensor 810 and the water flow rate sensor 810. The following system presents a detailed flow of the GMS engine to analyse the CBM failure prediction use-case. [0117] CBM Prediction Module 818: The CBM prediction module 818 may include aggregator and threshold systems those predict the type of failure 820 and the chances of failure 832. The type of failure may be determined based on a ranking of the probability of all the types of failures [provide information concerning the extreme but plausible scenarios to a user interface]. The chances of failure may be a number between 0 and 1.0. 
A high value indicates that the chances of failure is high…; And in [0053] In an embodiment, information related to failures may be accessed using the user equipment via set of instructions residing on any operating system, including but not limited to, Android™, iOS™, and the like [provide information concerning the extreme but plausible scenarios to a user interface]. In an embodiment, the one or more user equipment may be any smart computing devices and correspond to any electrical, electronic, electro-mechanical or an equipment or a combination of one or more of the above devices… [0057] FIG. 2B with reference to FIG. 2A, illustrates an exemplary representation of modelling system 106/user equipment 102 for facilitating prediction of failures associated with gas extraction systems, in accordance with an embodiment of the present disclosure…) While Kumar teaches, as noted above, training a generative adversarial network (GAN) over multiple iterations that converges to produce a trained GAN after a set number of iterations using a loss value, Jain alternatively teaches that a trained GAN is the result of a convergence process based on a loss value, in [0061] The discriminator neural network 510 takes as input, the actual data (e.g., the training data set 530) as well as the synthetic time series data set 525 from the generator neural network 505 labelled as “real” and “fake,” respectively and learns to distinguish “real” from “fake.”... The generator neural network 505 may then optimize its weights to minimize this loss value [repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network] which leads it to creating a better synthetic time series data set 525 that can fool the discriminator neural network 510 into predicting it as real. At the same time the discriminator neural network 510 is trying to maximize its probability of correctly predicting the real and fake labels.
Both the models are trained alternatively, and progress at such a pace that no one model should get better than the other to maintain the competition. The model training can be said to have converged [repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network] once the generator neural network 505 is producing high quality data and the discriminator neural network 510 is not able to confidently distinguish “real” from “fake” or when some other condition is satisfied (e.g., within a statistical threshold of similarity between the fake and the real is achieved). Jain and Kumar are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jain's teachings on retrieving and processing time series data using generator neural networks with Kumar's method of developing information retrieval and processing techniques using generator neural networks for predicting failures. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Jain and Kumar as noted above; doing so allows for improving data object analysis and machine learning predictions using data objects that include time series data entries processed using generator neural networks (Jain, [0047] & Abstract).
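The claimed loop (i)-(iv), as mapped above onto Kumar's generator/discriminator training and Jain's convergence criterion, can be sketched end to end. This is a minimal 1-D illustration, not the applicant's or the references' implementation: the data distribution, the network forms (affine generator, logistic discriminator), the learning rate, and the filter thresholds are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" evaluation change events: 1-D values drawn from an assumed
# N(0.5, 1.0) distribution (stand-in for the training data sets).
REAL_MU, REAL_SIGMA = 0.5, 1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

# Generator G (first neural network): affine map of noise, g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator D (second neural network): logistic model, d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.05, 256
for step in range(5000):
    z = rng.uniform(-1.0, 1.0, batch)          # (i) noise data
    fake = a * z + b                            # (ii) generated input data
    real = rng.normal(REAL_MU, REAL_SIGMA, batch)

    # (iii) discriminator scores on real vs. generated samples
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)

    # (iv) update D with G frozen: ascend log D(x) + log(1 - D(G(z)))
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # (iv) update G with D frozen: ascend log D(G(z)) (non-saturating loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

    # repeat (i)-(iv) until convergence: per Jain, stop once D can no
    # longer confidently distinguish real from generated (D(G(z)) ~ 0.5)
    if step > 500 and abs(np.mean(d_fake) - 0.5) < 0.01:
        break

# Filter step of claim 1 (illustrative thresholds): keep generated change
# events whose magnitude exceeds an "extreme" cutoff but stays within a
# "plausible" bound.
generated = a * rng.uniform(-1.0, 1.0, 10_000) + b
EXTREME, PLAUSIBLE = 1.0, 3.0
scenarios = generated[(np.abs(generated) > EXTREME) & (np.abs(generated) < PLAUSIBLE)]
```

The alternating updates (one network's weights held constant while the other's are adjusted) mirror Kumar's [0091], and the break condition mirrors Jain's convergence test; the closing filter corresponds to claim 1's "extreme but plausible" thresholding with made-up cutoff values.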
Regarding claim 2, the rejection of claim 1 is incorporated and Kumar in combination with Jain teaches the apparatus in claim 1, wherein the one or more hardware processors and the one or more memories are configured to generate at least one validation data set for hyperparameter training of the first neural network and the second neural network (in [0091] The Generator and Discriminator networks are trained using back propagation of gradients [generate at least one validation data set for hyperparameter training] of loss function w.r.t different parameters [hyperparameter training] in Generator G and discriminator D network. Generator and Discriminator weights are updated iteratively in training engine [wherein the one or more hardware processors and the one or more memories are configured to generate at least one validation data set for hyperparameter training of the first neural network and the second neural network]. In each iteration generator and discriminator weights are updated. While updating weights of a generator discriminator weights are kept constant and while updating the discriminator weights generator weights are kept constant. Number of iterations for which generator and discriminator weight updates happen is referred to as Number of epochs (N.sub.epoch).) and at least one test data set for testing the trained first neural network. (in [0090] The discriminator is trained to maximize the probability of assigning good working condition training examples the “good” and samples from p.sub.g the “failure” label. The generator is simultaneously trained to fool D via minimizing V(G)=log(1−D(G(z))) which is equivalent to maximizing V(G)=D(G(z)) [at least one test data set for testing the trained first neural network]. During adversarial training the generator improves in generating derived features in good condition and the discriminator progresses in correctly identifying good and not good features.
) Regarding claim 3, the rejection of claim 2 is incorporated and Kumar in combination with Jain teaches the apparatus in claim 2, the one or more hardware processors and the one or more memories are configured to evaluate the trained first neural network using the at least one test data set to determine whether the trained first neural network is generating plausible events. (in [0090] The discriminator is trained to maximize the probability of assigning good working condition training examples the “good” and samples from p.sub.g the “failure” label. The generator is simultaneously trained to fool D [the one or more hardware processors and the one or more memories are configured to evaluate the trained first neural network using the at least one test data set to determine whether the trained first neural network is generating plausible events] via minimizing V(G)=log(1−D(G(z))) which is equivalent to maximizing V(G)=D(G(z)). During adversarial training the generator improves in generating derived features in good condition and the discriminator progresses in correctly identifying good and not good features.) Regarding claim 5, the rejection of claim 1 is incorporated and Kumar in combination with Jain teaches the apparatus in claim 1, wherein the noise is multivariate noise for multiple data categories being monitored. (in [0061] In an embodiment, the GAN engine 216 may include machine learning techniques where given a training set, this technique learns to generate new data with the same statistics as the training set… Though originally proposed as a form of generative model for supervised learning, GANs have also proven useful for semi-supervised learning, unsupervised learning, and reinforcement learning. In an exemplary embodiment, the GAN engine can be configured to analyse each set of data packets received from the sensors […for multiple data categories being monitored].
[0062] In an embodiment, the prediction engine (218) [wherein the noise is multivariate noise for multiple data categories being monitored] may include machine learning methodologies using Gaussian process. The Gaussian process is a stochastic process (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution [wherein the noise is multivariate noise for multiple data categories being monitored], i.e., every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g., time or space…) Regarding claim 10, the rejection of claim 1 is incorporated and Kumar in combination with Jain teaches the apparatus in claim 1, wherein the one or more hardware processors and the one or more memories are configured to perform operations including pre-processing the training data to produce pre-processed training data for the training data set. (As depicted in Fig. 5A and Fig. 8; And in [0070] FIG. 5A illustrates an exemplary representation system architecture of General Adaptive Network (GAN) Engine, in accordance with an embodiment of the present disclosure. [0071] The above system takes sensor input 502-1, 502-2 and 502-3 as raw data for data acquisition 504, cleaning 506 [wherein the one or more hardware processors and the one or more memories are configured to perform operations including pre-processing the training data to produce pre-processed training data for the training data set] and labelling 510 and feed the feature generation 508 data to the GAN optimizer engine 512 processed in parallel to provide the output for the optimized failure prediction solution.
The system comprises of scenario, dynamic data and metadata tables as input and generates optimized failure prediction solution (dashboards, plots and CSV files) as output for the stakeholders to analyze and take decisions. And in [0115] CBM Time Series Data Processing module 814: The data processing step encompasses most of processing of raw data into a model-consumable form. It involves filling missing values, reduction of noise, cleaning data [wherein the one or more hardware processors and the one or more memories are configured to perform operations including pre-processing the training data to produce pre-processed training data for the training data set] in terms of improving the quality and then synchronising the data to bring a temporal consistency. ) Regarding claims 11 and 17, the limitations are similar to claim 1 and are thus rejected under the same rationale. Regarding claim 12, the rejection of claim 11 is incorporated and Kumar in combination with Jain teaches the method in claim 11, wherein the (iv) modifying step includes updating parameters associated with the first neural network and the second neural network of the GAN based on the loss value. (in [0091] The Generator and Discriminator networks are trained using back propagation of gradients [wherein the (iv) modifying step includes updating parameters associated with the first neural network and the second neural network of the GAN based on the loss value] of loss function w.r.t different parameters [wherein the (iv) modifying step includes updating parameters associated with the first neural network and the second neural network of the GAN based on the loss value] in Generator G and discriminator D network. Generator and Discriminator weights are updated iteratively in training engine. In each iteration generator and discriminator weights are updated.
While updating weights of a generator discriminator weights are kept constant and while updating the discriminator weights generator weights are kept constant. Number of iterations for which generator and discriminator weight updates happen is referred to as Number of epochs (N.sub.epoch).) Regarding claims 14 and 15, the limitations are similar to the ones in claims 2-3 and are thus rejected under the same rationale. Claims 1, 8-9, 13, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (US 20200265032, hereinafter ‘Lin’) in view of Kumar et al. (US 20230359193, hereinafter ‘Kumar’). Regarding independent claim 1, Lin teaches an apparatus including a generative neural network for generating extreme but plausible scenarios for a system with multiple data categories, the apparatus comprising: one or more hardware processors; one or more memories in communication with the one or more hardware processors; wherein: the one or more hardware processors and the one or more memories are configured to: (in [0124] As previously noted, whenever it is described in this document that a software module or software process performs any action, the action is in actuality performed by underlying hardware elements according to the instructions that comprise the software module… (b) alternatively or additionally, to the extent it is described herein that one or more software modules exist within the component, in some embodiments, such software modules (as well as any data described herein as handled and/or used by the software modules) are stored in the memory devices 704 (e.g., in various embodiments, in a volatile memory device such as a RAM or an instruction register and/or in a non-volatile memory device such as a flash memory or hard disk) and all actions described herein as performed by the software modules are performed by the processors 702 in conjunction with, as appropriate, the other elements in and/or connected to the computing device 700 
(i.e., the network interface devices 706, display interfaces 708, user input adapters 710, and/or display device 712); (c) alternatively or additionally, to the extent it is described herein that the component processes and/or otherwise handles data, in some embodiments, such data is stored in the memory devices 704 (e.g., in some embodiments, in a volatile memory device such as a RAM and/or in a non-volatile memory device such as a flash memory or hard disk) and/or is processed/handled by the processors 702 in conjunction, as appropriate, the other elements in and/or connected to the computing device 700 (i.e., the network interface devices 706, display interfaces 708, user input adapters 710, and/or display device 512); (d) alternatively or additionally, in some embodiments, the memory devices 702 store instructions that, when executed by the processors 702, cause the processors 702 to perform, in conjunction with, as appropriate, the other elements in and/or connected to the computing device 700 (i.e., the memory devices 704, network interface devices 706, display interfaces 708, user input adapters 710, and/or display device 512), each or any combination of actions described herein as performed by the component and/or by any software modules described herein as included within the component.) determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data; (in [0027] Datasets are provided to system 100 from data sources 102 [determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data]. Data sources 102 can be computing devices (e.g., 700) that are remotely located from system 100… For example, a company may post or make available data (e.g., in the form of a text file, a database file, a character delimited file, email, etc. . . . 
) on a daily, weekly, monthly, or quarterly basis [determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data]. System 100 may then use that data as a dataset (or part of a dataset) for processing… [0028] ... The techniques discussed herein may provide a determination as to if the processing associated with, for example, a weather prediction service or system (or other type of service or system) has changed (e.g., over time) … [0029] Three non-limiting illustrative examples of different types of datasets are shown in FIGS. 3A-3C.... [0032] Naturally, the above are just examples of the types of data that may be assessed using the techniques described herein… [0039] Returning to FIG. 1, once the data is separated by the separator module 106, the paired subgroups (which are also datasets themselves) are passed to one or more (usually multiple) different detector modules 108a-108c. Note that while three detector modules are shown in FIG. 1, any number of detector modules may be used. For example, 5 or 10 detector modules may be used by the system 100 when datasets are analyzed [determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data]. [0040] The detector module(s) may be programmed to determine (or provide metrics that are used to determine) how different/similar the two subgroups of data within a given dataset are. Each different type of detector may be used to glean or determine different aspects of how the subgroups are different (or not different) from one another. Each of the detector modules may be a separate computer process (e.g., a software application) that takes the provided datasets as input (e.g., from the separator module 102) and outputs results of the processing performed by the detector.) 
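Lin's separator/detector/evaluator pipeline quoted above can be reduced to a short sketch: split a dataset into subgroups, score how different the subgroups are, and compare that score to a threshold. The function names and the mean-shift metric below are assumptions for illustration only, not Lin's disclosed implementation.

```python
# Illustrative sketch of the separator -> detector -> evaluator flow in Lin.
# All names and the mean-shift metric are assumptions for this sketch.

def separate(dataset, split_index):
    """Separator module: split a dataset into two subgroups."""
    return dataset[:split_index], dataset[split_index:]

def mean_shift_detector(group_a, group_b):
    """Detector module: score the difference between subgroups (here, a mean shift)."""
    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)
    return abs(mean_a - mean_b)

def evaluate(metric, threshold):
    """Evaluator module: binary pass/fail determination against a threshold."""
    return "changed" if metric > threshold else "unchanged"

daily_changes = [0.1, 0.2, 0.1, 0.2, 1.5, 1.6, 1.4, 1.5]  # change events per interval
early, late = separate(daily_changes, 4)
print(evaluate(mean_shift_detector(early, late), threshold=0.5))
```

Lin's actual detectors include distribution-shift, structural-change, time-series, and GAN-based detectors; the mean-shift detector here merely stands in for that role.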
determine one or more training data sets based on the training data; (in [0058] GAN detector 400 in FIG. 4A includes a training module 402 and a detection module 404. Training module 402 a detection module 404 are software processes, but may also be implemented in hardware (e.g., FPGAs, ASICs, etc.) As previously mentioned, separator module 106 may be designed to split a dataset into two different subgroups or sub-datasets [determine one or more training data sets based on the training data]. These subgroups are represented in FIG. 4 as dataset 1 (406) and dataset 2 (416). So in FIG. 4A, the separator 106 passes dataset 406 to the training module 402 and dataset 416 to the detection module 404 for processing.) (i) generate noise data associated with multiple random variables; (ii) process the noise data, by a first neural network of a Generative Adversarial Network (GAN), to produce generated input data; (in [0057] A generative adversarial network (GAN) training strategy sets up a game between two competing (e.g., neural) networks. A first network (a generator network) combines a source of noise to/with an input dataset (e.g., an input space) to produce a synthetic dataset. A second network (a discriminator network) then receives true data and the output from the generator and distinguishes between the two. Further discussion of GANs may be found in Improved Training of Wasserstein GANs from Gulrajani et al, December 2017, the entire contents being hereby incorporated by reference...; And in [0064] Generator network 412 also receives conditional data 408 and noise 410 [generate noise data associated with multiple random variables] to produce synthetic dataset 424 [(ii) process the noise data, by a first neural network of a Generative Adversarial Network (GAN), to produce generated input data] and synthetic conditional data 426. Both synthetic dataset 424 and synthetic conditional data 426 are passed to the discriminator network 414.) 
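The generator step quoted from Lin's [0064] (noise plus conditional data in, synthetic data out) can be illustrated with a toy sketch. The linear "generator" and every name below are assumptions for the sketch; an actual GAN generator is a trained multi-layer network.

```python
import random

# Toy illustration of steps (i)-(ii): generate noise data associated with
# multiple random variables, then map it (with conditional data) to
# synthetic output. The linear mapping is an assumption, not Lin's network.

def generate_noise(n, n_vars, seed=0):
    """(i) Generate noise data associated with multiple random variables."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(n_vars)] for _ in range(n)]

def toy_generator(noise_row, conditional, weights):
    """(ii) Map one noise vector (plus conditional data) to a synthetic value."""
    return sum(w * z for w, z in zip(weights, noise_row)) + conditional

noise = generate_noise(n=3, n_vars=2)
synthetic = [toy_generator(z, conditional=0.5, weights=[0.3, -0.2]) for z in noise]
print(len(synthetic))  # one synthetic sample per noise vector
```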
(iii) process, by a second neural network of the GAN, the generated input data and the one or more training data sets to produce a loss value; (in [0066] Based on the results of discriminating over the synthetic dataset 424 and/or dataset 406, the discriminator network 414 will feedback critic data (e.g., critical loss) [process, by a second neural network of the GAN, the generated input data and the one or more training data sets to produce a loss value] to the generator network 412. This data is used to inform the generator network 412 on the quality of the generated synthetic data. Such data may include how close the synthetic data is to the real data and which “direction” the generator network 412 should go for future synthetic datasets (e.g., such information may assist in training the generator network 412)… [0070] In certain example embodiments, once the discriminator network is trained (420) (e.g., it has converged), then a benchmark critic loss distribution may generated. The benchmark critic loss distribution may be determined by passing two datasets (e.g., x.sub.1 and x.sub.2, which may be subsets within dataset 406) and any corresponding conditional information to the trained discriminator network 420 to calculate the critic loss for each dataset. In general, the two datasets passed into the trained discriminator network 420 may be sample data from the same overarching dataset… [0071] An expected distribution spread for f should be around zero (e.g., if the discriminator network has been well trained and x.sub.1 and x.sub.2 are from the same dataset). 
Conversely, a distribution spread that is not zero may indicate that x.sub.1 and x.sub.2 are not from the same dataset… ) (iv) modify the first neural network and the second neural network based on the loss value; repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network; (in [0057] A generative adversarial network (GAN) training strategy sets up a game between two competing (e.g., neural) networks. A first network (a generator network) combines a source of noise to/with an input dataset (e.g., an input space) to produce a synthetic dataset. A second network (a discriminator network) then receives true data and the output from the generator and distinguishes between the two. Further discussion of GANs may be found in Improved Training of Wasserstein GANs from Gulrajani et al, December 2017, the entire contents being hereby incorporated by reference… [0062] An optimization function called “gradient descent” (call gradient penalty in FIG. 4) can be used to adjust weights according to the error they caused until the error cannot be reduced any more or reaches a threshold value. The neural network converges when it has reached that threshold error, and at that convergence point, the neural network is “trained” (e.g., thus producing discriminator network 420 or other “trained” networks).) generate evaluation change events for the multiple data categories using the trained first neural network, and based on the evaluation change events, produce generated change events; (in [0021] The system includes a separator module (also called a separator), one or more detector modules (also called detectors), and a evaluating module (also called an evaluator). The separator module splits datasets into subgroups (e.g., sub datasets) and then feeds the subgroups to the various detectors of the system. 
Each detector (there are usually multiple different types of detectors for a given system) then evaluates (e.g., separately) a level of difference between the paired subgroups that have been supplied to the respective detector and outputs one or more metrics that are based on the difference. Different types of detector modules may include, for example, a distribution shift detector, a structural change detector, a time series characteristics detector, a detector that uses generative adversarial networks (GAN) or a GAN detector, and others. The outputted metrics [generate evaluation change events for the multiple data categories using the trained first neural network, and based on the evaluation change events, produce generated change events] from the detectors are provided to the evaluator that may then generate, for example, a pass/fail determination (e.g., a binary decision) or a probabilistic determination [and based on the evaluation change events, produce generated change events] that relates to the data source(s) that produced the datasets.) filter the generated change events to identify extreme but plausible scenarios using a predetermined change measure with one or more predetermined thresholds; and provide information concerning the extreme but plausible scenarios . (in [0105] Once the discriminator network 414 has converged then the GAN detector moves to detection and metric generation. For this processing a benchmark critic loss distribution is generated by using two sample sets of real daily returns from dataset A (x1 and x2) and corresponding conditional data. A test data critic loss distribution is also generated by using x1 of the real daily returns from dataset A and dataset B (along with corresponding conditional data). The distance between the benchmark critic loss distribution and the test data critic loss distribution is then calculated and passed to the evaluator module 110. 
The evaluator module 110 may then compare the received value that is (or is based on) the calculated distance to a threshold value to determine if x1 and X3 are for the same stock [filter the generated change events to identify extreme but plausible scenarios using a predetermined change measure with one or more predetermined thresholds]. In this example, the calculated distance is 0.6 (see FIG. 4B) which is greater than a threshold of 0.1. Accordingly, the evaluator module 110 determines that x1 and x3 are not the same. [0106] The determination may be passed on to monitoring service 112 that may then issue an alert [and provide information concerning the extreme but plausible scenarios to a user interface], log the determination to a database, or other additional actions [and provide information concerning the extreme but plausible scenarios to a user interface].…; And in [0053] Additional processing may be handled by the monitoring service module 112. In certain example embodiments, the monitoring service module 112 may be programmed to issue alerts [provide information concerning the extreme but plausible scenarios to a user interface] (e.g., via e-mail, text, etc. . . . ) when a dataset is determined to be anomalous or otherwise includes changes that are statistically meaningful or significant...) Lin does not expressly recite the use of random variables. Kumar does expressly teach the use of random variables, in [0062] In an embodiment, the prediction engine (218) may include machine learning methodologies using Gaussian process. The Gaussian process is a stochastic process (a collection of random variables indexed by time or space) [generate noise data associated with multiple random variables], such that every finite collection of those random variables has a multivariate normal distribution, i.e., every finite linear combination of them is normally distributed. 
The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g., time or space. A machine-learning algorithm that involves a Gaussian process uses lazy learning and a measure of the similarity between points (the kernel function) to predict the value for an unseen point from training data… Kumar also expressly teaches the use of a generative adversarial network (GAN) for processing time series data, as depicted in Fig. 5 and [0071] The above system takes sensor input 502-1, 502-2 and 502-3 as raw data for data acquisition 504 [generative neural network for generating extreme but plausible scenarios for a system with multiple data categories], cleaning 506 and labelling 510 and feed the feature generation 508 data to the GAN optimizer engine 512 processed in parallel to provide the output for the optimized failure prediction solution… [0072] The above system has broadly following steps— [0073] Feature extraction from time series of different sensors [0074] Data labelling process [0075] Training of GAN Models [0076] Inference pipeline… [0085] In an embodiment, for each gas well, workover start date (ws.sup.start) and workover end date (ws.sup.end) are available as a csv file which are used to mark each observation computed in feature extraction process. All observations between workover start date and workover end date are marked as failure condition data. Also, a window (W days) of observations before workover start date are also marked as failure in order to enable ahead failure prediction. All other observations outside of the window (ws.sup.start−W, ws.sup.end) are marked as observations belonging to good condition data.
After labelling process we have data in the format {x.sup.t, y.sup.t}, where y.sup.t takes value either good or failure [determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data]. Kumar and Lin are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information retrieval and processing techniques using generator neural networks for predicting failures as disclosed by Kumar with the method of analyzing such datasets to determine whether there have been any changes or alterations using generator neural networks as disclosed by Lin. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Kumar and Lin noted above; doing so helps improve data quality due to labelling and modeling complex distributions of feature vectors accurately, to further develop automated techniques for finding deviations from a normal data distribution used for detecting failures (Kumar, Abstract). Regarding claim 8, the rejection of claim 1 is incorporated and Lin in combination with Kumar teaches the apparatus in claim 1, wherein modifying the first neural network and the second neural network based on the difference includes updating weights and biases associated with the first neural network and the second neural network based on the loss value. (in [0057] A generative adversarial network (GAN) training strategy [wherein modifying the first neural network and the second neural network based on the difference includes updating weights and biases associated with the first neural network and the second neural network based on the loss value.]
sets up a game between two competing (e.g., neural) networks. A first network (a generator network) [modifying the first neural network and the second neural network based on the difference includes updating weights and biases associated with the first neural network] combines a source of noise to/with an input dataset (e.g., an input space) to produce a synthetic dataset. A second network (a discriminator network) [the second neural network based on the loss value] then receives true data and the output from the generator and distinguishes between the two. Further discussion of GANs may be found in Improved Training of Wasserstein GANs from Gulrajani et al, December 2017, the entire contents being hereby incorporated by reference. [0059] In certain example embodiments, the networks discussed herein (e.g., generator network 412, discriminator network 414/420, etc. . . . ) may be neural networks. Neural networks may group input data sets according to similarities among the input data sets by learning to approximate an unknown function between any input and any output. In the process of learning, the neural network may find a function that transforms the input into the output. Neural networks include processing nodes and each processing node in a layer of nodes in the neural network combines input data sets with a set of coefficients, or weights [modifying the first neural network and the second neural network based on the difference includes updating weights and biases associated with the first neural network], that either increase or decrease that input, thereby assigning significance to input data sets for the target metric the neural network is trying to learn. 
These input-weight products or weighted [updating weights] input datasets may then be, for example, summed, and the sum [biases associated with the first neural network] is passed through a processing node's activation function to determine whether, and/or to what extent, that sum signal progresses further through the network to affect the ultimate neural network output. [0060] When training the neural network, each node layer learns automatically by repeatedly trying to produce or reconstruct a target metric. Each training iteration produces an error measurement or “loss” [second neural network based on the loss value] (e.g., the critic loss that is passed back to the generator network 412) between the weighted input and the target metric, and the error is used to adjust the weights to the extent they contributed to the error. A collection of weights, whether at the start or end state of training, is also called a model. A neural network can be viewed as a corrective feedback loop, rewarding (increasing) weights that reduce error and punishing (decreasing) weights that increase error.) Additionally, Kumar teaches wherein modifying the first neural network and the second neural network based on the difference includes updating weights and biases associated with the first neural network and the second neural network based on the loss value, in [0091] The Generator and Discriminator networks are trained [wherein modifying the first neural network and the second neural network based on the difference includes updating weights and biases associated with the first neural network] using back propagation of gradients of loss function [… the second neural network based on the loss value] w.r.t different parameters in Generator G and discriminator D network. 
Generator and Discriminator weights are updated iteratively in training engine [wherein modifying the first neural network and the second neural network based on the difference includes updating weights and biases associated with the first neural network]. In each iteration generator and discriminator weights are updated. While updating weights of a generator discriminator weights [updating weights and biases associated with the first neural network] are kept constant and while updating the discriminator weights generator weights are kept constant. Number of iterations for which generator and discriminator weight updates happen is referred to as Number of epochs (N.sub.epoch). [0093] ZEstimator: New Feature to Latent Space: When adversarial training is completed, the generator has learned the mapping G(z) [updating … biases associated with the first neural network] from latent space representations z to feature space of good working condition x of the CBM well. But, GANs do not automatically provide inverse mapping μ(x) from feature space to latent space. The latent space has smooth transitions, so sampling two points close in the latent space generates two similar derived features. Given a query feature x, a point z, in the latent space that corresponds to feature G(z) that is similar to query feature vector x. To find the best z, z.sub.1 is randomly sampled from the latent space distribution and fed into the generator to get a generated derived feature vector G(z.sub.1). Based on the generated derived feature vector G(z.sub.1) a loss function is defined, which provides gradients for the update of coefficients of z.sub.1 resulting in an updated position in latent space z.sub.2. In order to find the most similar image G(z.sub.Γ), the location of z in the latent space is optimized in an iterative process via γ=1, 2, 3, . . . , Γ back propagation steps. 
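Kumar's ZEstimator passage above describes recovering a latent point z by iterated back-propagation steps on a loss between G(z) and a query feature x, since GANs provide no automatic inverse mapping. A minimal sketch, assuming a 1-D linear generator and a squared loss (both illustrative assumptions, not Kumar's disclosed networks):

```python
# Hedged sketch of the ZEstimator idea: no inverse mapping exists, so z is
# found by gradient descent on a loss between G(z) and the query feature x.
# The 1-D linear generator and squared loss are illustrative assumptions.

def G(z, w=2.0):
    """Toy generator mapping a latent scalar z to feature space."""
    return w * z

def z_estimator(x, z0=0.0, lr=0.1, steps=100, w=2.0):
    """Iteratively update z via back-propagation-style gradient steps."""
    z = z0
    for _ in range(steps):
        grad = 2.0 * (G(z, w) - x) * w  # d/dz of (G(z) - x)**2
        z -= lr * grad
    return z

z_hat = z_estimator(x=3.0)
print(abs(G(z_hat) - 3.0) < 1e-6)  # G(z_hat) reproduces the query feature
```

The smooth-latent-space property Kumar notes is what makes this search meaningful: nearby z values generate similar features, so gradient steps move G(z) steadily toward x.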
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Lin and Kumar for the same reasons disclosed above. Regarding claim 9, the rejection of claim 1 is incorporated and Lin in combination with Kumar teaches the apparatus in claim 1, wherein the convergence value corresponds to an equilibrium being reached between the first neural network and the second neural network. (in [0062] An optimization function called “gradient descent” (call gradient penalty in FIG. 4) can be used to adjust weights according to the error they caused until the error cannot be reduced any more or reaches a threshold value. The neural network converges when it has reached that threshold error, and at that convergence point, the neural network is “trained” (e.g., thus producing discriminator network 420 or other “trained” networks) [wherein the convergence value corresponds to an equilibrium being reached between the first neural network and the second neural network]…; And in [0105] Once the discriminator network 414 has converged then the GAN detector moves to detection and metric generation [wherein the convergence value corresponds to an equilibrium being reached between the first neural network and the second neural network]. For this processing a benchmark critic loss distribution is generated by using two sample sets of real daily returns from dataset A (x1 and x2) and corresponding conditional data…) Regarding claims 11 and 17, the limitations are similar to those of claim 1 and are thus rejected under the same rationale. Regarding claim 13, the limitations are similar to those of claim 9 and are thus rejected under the same rationale. Claims 1, 9, 11, 13, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Jain et al. (US 20230334299, hereinafter ‘Jain’) in view of Munoz Delgado (US 20210019572, hereinafter ‘Mun’).
Regarding independent claim 1, Jain teaches an apparatus including a generative neural network for generating extreme but plausible scenarios for a system with multiple data categories, the apparatus comprising: one or more hardware processors; one or more memories in communication with the one or more hardware processors; wherein: the one or more hardware processors and the one or more memories are configured to: (in [0080] FIG. 13 is a diagram that illustrates an exemplary computing system 1300 in accordance with embodiments of the present technique… Further, processes and modules described herein may be executed by one or more processing systems similar to that of computing system 1300… [0081] Computing system 1300 may include one or more processors (e.g., processors 1310a-1310n) coupled to system memory 1320, an input/output I/O device interface 1330, and a network interface 1340 via an input/output (I/O) interface 1350. A processor may include a single processor or a plurality of processors (e.g., distributed processors). A processor may be any suitable processor capable of executing or otherwise performing instructions. A processor may include a central processing unit (CPU) that carries out program instructions to perform the arithmetical, logical, and input/output operations of computing system 1300. A processor may execute code (e.g., processor firmware, a protocol stack, a database management system, an operating system, or a combination thereof) that creates an execution environment for program instructions. A processor may include a programmable processor. A processor may include general or special purpose microprocessors. A processor may receive instructions and data from a memory (e.g., system memory 1320)...) determine evaluation change events for the multiple data categories for repeated time intervals over a time period to produce training data; (As depicted in Fig. 4 and in [0051] FIG. 
4 illustrates an example workflow of a time series simulation model 400 of the present disclosure. As illustrated, the time series simulation model controller 205 may have obtained the data object 405 that may be a data object 212 of FIG. 2. The data object 405 may include a data set 405a having a first weight, a data set 405b having a second weight, a data set 405c having a third weight, and up to a data set 405n having an nth weight [determine evaluation change events for the multiple data categories]. However, one of skill in the art will recognize that fewer or more data sets may be included in the data object 405. A data set of the data sets 405a-405n may include time series data sets [for the multiple data categories for repeated time intervals over a time period to produce training data] and may include a label for that data set [determine evaluation change events for the multiple data categories]. Each data set 405a-405n may be related to a property (e.g., a property 445) for which the time series simulation model 400 is attempting to predict.) determine one or more training data sets based on the training data; (in [0056] … For example, the time series simulation model controller 205 may include a data set selector 410 [determine one or more training data sets based on the training data], as illustrated in FIG. 4, that is configured to select a data set included in the data object. In an example, the data set selector 410 may include a random number generator that generates a number that is substantially random where the number generated is an integer that is between “1” and the number of data sets included in the data object. 
However, in other embodiments, the data set selector 410 may select the first data set in the data object [determine one or more training data sets based on the training data], the last data set in the data object, the data set with the greatest weight, the data set with the least weight, or may select the data set according to any other criteria that would be apparent to one of skill in the art in possession of the present disclosure. The selected time series data set may be associated with a label. The label may identify or otherwise describe the time series data set. The label may include a machine learning label such that the label may be used in machine learning algorithms. ) (i) generate noise data associated with multiple random variables; (ii) process the noise data, by a first neural network of a Generative Adversarial Network (GAN), to produce generated input data; (in [0059] As illustrated, in FIG. 4. the series of generated values may be provided by the synthetic data generator 415 along with the label for the selected to a base GAN 420. Also, as illustrated, the time series simulation model 400 may include a coupled GAN 430 as indicated above. The base GAN 420 and the coupled GAN 430 are trained for each time series data set of the plurality of time series data sets 405a-405n. In various embodiments, the base GAN 420 may be trained by the time series simulation model training controller 204. [0060] For example, FIG. 5 illustrates a training workflow of a base GAN 500 that may be provided by the base GAN 420. The base GAN 500 may include a generator neural network 505 and a discriminator neural network 510. As would be appreciated by one of skill in the art in possession of the present disclosure, GANs are highly adaptive and can be trained to learn several data distributions and generate its synthetic counterpart, which can then be used in downstream applications. 
A basic conventional GAN architecture includes two neural networks (e.g., the generator neural network 505 [a first neural network of a Generative Adversarial Network (GAN),] and the discriminator neural network 510). The base GAN 500 may include generator neural network 505 that takes random noise 515 [generate noise data associated with multiple random variables] (e.g., the series of generated values) and a label 520 of a training data set as inputs and learns to generate outputs [process the noise data, by a first neural network of a Generative Adversarial Network (GAN), to produce generated input data] (e.g., a synthetic time series data set 525) that aim to resemble the actual data set (e.g., the training data set 530) associated with the label 520 without seeing the training data set 530 that is associated with the label 520.) (iii) process, by a second neural network of the GAN, the generated input data and the one or more training data sets to produce a loss value; (in [0060] For example, FIG. 5 illustrates a training workflow of a base GAN 500 that may be provided by the base GAN 420. The base GAN 500 may include a generator neural network 505 and a discriminator neural network 510. As would be appreciated by one of skill in the art in possession of the present disclosure, GANs are highly adaptive and can be trained to learn several data distributions and generate its synthetic counterpart, which can then be used in downstream applications. A basic conventional GAN architecture includes two neural networks (e.g., the generator neural network 505 and the discriminator neural network 510 [process, by a second neural network of the GAN]). 
The base GAN 500 may include generator neural network 505 that takes random noise 515 (e.g., the series of generated values) and a label 520 of a training data set as inputs and learns to generate outputs (e.g., a synthetic time series data set 525) that aim to resemble the actual data set (e.g., the training data set 530) associated with the label 520 without seeing the training data set 530 that is associated with the label 520. [0061] The discriminator neural network 510 takes as input, the actual data (e.g., the training data set 530) as well as the synthetic time series data set 525 from the generator neural network 505 labelled as “real” and “fake,” respectively and learns to distinguish “real” from “fake.” In some embodiments, the “real” label and the “real” training data set 530 may alternate with the “fake” label and the synthetic time series data set 525 when provided to the discriminator neural network 510. Periodically (e.g., 10% of the time or some other percentage), the training data set 530 may be labeled with the “fake” label and the synthetic time series data set 525 may be a labeled with the “real.” The discriminator neural network 510 also receives the label 520 associated with the training data set 530. The feedback on the synthetic time series data set 525 from the discriminator neural network 510 gets passed on to the generator neural network 505 as a loss value [produce a loss value]. 
The generator neural network 505 may then optimize its weights to minimize this loss value [process, by a second neural network of the GAN, the generated input data and the one or more training data sets to produce a loss value] which leads it to creating a better synthetic time series data set 525 that can fool the discriminator neural network 510 into predicting it as real…) (iv) modify the first neural network and the second neural network based on the loss value; repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network; (in [0063] While the coupled GAN is discussed in more detail below, the coupled GAN 430 may be trained by the time series simulation model training controller 204 as well. FIG. 6 illustrates a training workflow of a coupled GAN 600 that may be provided by the coupled GAN 430. In various embodiments, the coupled GAN 600 may be trained similar to the base GAN 500... The generator neural network 605 generates a synthetic time series data set 635. The discriminator neural network 610 may receive the synthetic time series data set 635 as well as a training time series data set 640 that may include the actual time series data set (as used herein actual may include real data measured from the associated environment or it may include another synthetic data set that is being analyzed). The discriminator neural network 610 may also receive the label 620, the label 625, and the time series data set 630. The feedback on the synthetic time series data set 635 from the discriminator neural network 610 gets passed on to the generator neural network 605 as a loss value. 
The generator neural network 605 may then optimize its weights to minimize this loss value [(iv) modify the first neural network and the second neural network based on the loss value; repeat (i)-(iv) until the loss value reaches a convergence value resulting in a trained GAN including a trained first neural network;], which leads it to create a better synthetic time series data set 635 that can fool the discriminator neural network 610 into predicting it as real. At the same time, the discriminator neural network 610 is trying to maximize its probability of correctly predicting the “real” and “fake” labels and learns to distinguish “real” from “fake”…) generate evaluation change events for the multiple data categories using the trained first neural network, and based on the evaluation change events, produce generated change events; (in [0060] For example, FIG. 5 illustrates a training workflow of a base GAN 500 that may be provided by the base GAN 420. The base GAN 500 may include a generator neural network 505 and a discriminator neural network 510. As would be appreciated by one of skill in the art in possession of the present disclosure, GANs are highly adaptive and can be trained to learn several data distributions and generate their synthetic counterparts, which can then be used in downstream applications. A basic conventional GAN architecture includes two neural networks (e.g., the generator neural network 505 and the discriminator neural network 510). 
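For readers unfamiliar with the alternating training workflow the cited passages describe, the loop of steps (i)-(iv) can be sketched roughly as follows. This is a minimal illustration only: the scalar stand-ins for the two networks, the learning rate, and all function and variable names are hypothetical and do not come from the application or the cited references.

```python
import random

def train_gan(steps=1000, lr=0.05, convergence=0.02):
    """Toy sketch of the alternating GAN training loop: (i) generate
    synthetic data from random noise, (ii) obtain discriminator feedback,
    (iii) compute a loss value, (iv) update both networks, repeating
    until the loss value reaches a convergence value."""
    # Scalar stand-ins for network parameters: the "generator" learns a
    # single offset, the "discriminator" a single decision threshold.
    gen_param, disc_param = 0.0, 0.5
    real_mean = 1.0  # mean of the "actual" training data distribution
    loss = float("inf")
    for _ in range(steps):
        noise = random.gauss(0.0, 1.0)       # (i) random noise input
        fake = gen_param + 0.1 * noise       # generator output
        loss = abs(fake - real_mean)         # (ii)-(iii) feedback as a loss
        # (iv) nudge the generator toward fooling the discriminator, and
        # the discriminator's threshold toward the observed data
        gen_param += lr * (real_mean - fake)
        disc_param += lr * (fake - disc_param)
        if loss < convergence:               # convergence value reached
            break
    return gen_param, loss
```

After training, the toy generator's parameter sits near the "real" data mean, mirroring the specification's account of the generator learning to produce data the discriminator cannot confidently distinguish from real.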
The base GAN 500 may include a generator neural network 505 that takes random noise 515 (e.g., the series of generated values) and a label 520 of a training data set as inputs and learns to generate outputs [generate evaluation change events for the multiple data categories using the trained first neural network, and based on the evaluation change events, produce generated change events] (e.g., a synthetic time series data set 525) that aim to resemble the actual data set (e.g., the training data set 530) associated with the label 520 [produce generated change events] without seeing the training data set 530 that is associated with the label 520.) filter the generated change events to identify extreme but plausible scenarios using a predetermined change measure with one or more predetermined thresholds; (in [0061] The discriminator neural network 510 takes as input the actual data (e.g., the training data set 530) as well as the synthetic time series data set 525 from the generator neural network 505, labeled as “real” and “fake,” respectively, and learns to distinguish “real” from “fake.” [filter the generated change events to identify extreme but plausible scenarios] In some embodiments, the “real” label and the “real” training data set 530 may alternate with the “fake” label and the synthetic time series data set 525 when provided to the discriminator neural network 510… At the same time, the discriminator neural network 510 is trying to maximize its probability of correctly predicting the “real” and “fake” labels. Both models are trained alternately and progress at such a pace that neither model should get better than the other, to maintain the competition. 
The model training can be said to have converged once the generator neural network 505 is producing high quality data and the discriminator neural network 510 is not able to confidently distinguish “real” from “fake” [filter the generated change events to identify extreme but plausible scenarios] or when some other condition is satisfied (e.g., when a statistical threshold of similarity between the fake and the real data is achieved) [using a predetermined change measure with one or more predetermined thresholds].) and provide information concerning the extreme but plausible scenarios. (in [0063] ... As illustrated in FIG. 6, the coupled GAN 600 may include a generator neural network 605 and a discriminator neural network 610… The discriminator neural network 610 may also receive the label 620, the label 625, and the time series data set 630. The feedback on the synthetic time series data set 635 from the discriminator neural network 610 is passed to the generator neural network 605 as a loss value. The generator neural network 605 may then optimize its weights to minimize this loss value, which leads it to create a better synthetic time series data set 635 that can fool the discriminator neural network 610 into predicting it as real [and provide information concerning the extreme but plausible scenarios]…) While Jain teaches training a generative adversarial network to generate extreme scenarios as processed data to fool a discriminator network such that the discriminator network can provide predictive information concerning the processed data, Jain does not expressly teach providing the discriminator network information to a user interface. Mun expressly teaches providing the discriminator network information to a user interface, in [0046]: As an optional component, the system 100 may comprise a display output interface 180 or any other type of output interface for outputting one or more discriminator instances to a rendering device, such as a display 180. 
For example, the display output interface 180 may generate display data 182 for the display 190 which causes the display 190 to render the one or more discriminator instances [provide information concerning the extreme but plausible scenarios to a user interface] in a sensory perceptible manner, e.g., as an on-screen visualisation 192… [0063] So far, it has been discussed how the discriminative model DM and generative model GM may be applied. The resulting outputs of the models can be put to various uses. For example, considering the discriminator scores 364 together as a discriminator instance, one or more such discriminator instances may be output in a sensory-perceptible manner to a user, e.g., when applying a convolutional neural network DM to an input image II, the output DS may itself be regarded as a discriminator image that can be output, e.g., shown on a screen. Such an output may be useful as a debugging output to let a user control the training of generative model GM, but also more generally, e.g., for anomaly detection [provide information concerning the extreme but plausible scenarios to a user interface] as discussed in D. Li et al., “MAD-GAN:.. Mun and Jain are analogous art because both involve developing information retrieval and processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information retrieval and processing techniques using generator neural networks for applying machine learning models to sensor data, as disclosed by Mun, with the method of retrieving information and processing time series data using generator neural networks as disclosed by Jain. 
One of ordinary skill in the art would have been motivated to combine the methods disclosed by Mun and Jain noted above; doing so helps improve the training of the machine learning model and obtain more accurate model outputs, e.g., classifications (Mun, [0029]). Regarding claim 9, the rejection of claim 1 is incorporated, and Jain in combination with Mun teaches the apparatus in claim 1, wherein the convergence value corresponds to an equilibrium being reached between the first neural network and the second neural network. (in [0061] …on. The model training can be said to have converged once the generator neural network 505 is producing high quality data and the discriminator neural network 510 is not able to confidently distinguish “real” from “fake” [wherein the convergence value corresponds to an equilibrium being reached between the first neural network and the second neural network.] or when some other condition is satisfied (e.g., when a statistical threshold of similarity between the fake and the real data is achieved).) Regarding claims 11 and 17, the limitations are similar to claim 1 and are thus rejected under the same rationale. Regarding claim 13, the limitations are similar to claim 9 and are thus rejected under the same rationale. Regarding claim 19, the limitations are similar to claim 9 and are thus rejected under the same rationale. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (US 20230359193, hereinafter ‘Kumar’) in view of Jain et al. (US 20230334299, hereinafter ‘Jain’) in further view of Besenbruch et al. (US 20230154055, hereinafter ‘Bes’). Regarding claim 4, the rejection of claim 1 is incorporated, and Kumar in combination with Jain teaches the apparatus in claim 1, wherein a dimension of the noise corresponds to one of a Student's t-probability distribution or a normal probability distribution. 
(in [0062] In an embodiment, the prediction engine (218) may include machine learning methodologies using Gaussian process. The Gaussian process is a stochastic process [wherein a dimension of the noise corresponds to one of a …. or a normal probability distribution] (a collection of random variables indexed by time or space), such that every finite collection of those random variables has a multivariate normal distribution, i.e., every finite linear combination of them is normally distributed. The distribution of a Gaussian process is the joint distribution of all those (infinitely many) random variables, and as such, it is a distribution over functions with a continuous domain, e.g., time or space…) Kumar and Jain do not expressly teach the noise corresponding to one of a Student's t-probability distribution. Bes does expressly teach noise corresponding to one of a Student's t-probability distribution, in [0207]: The method may be one wherein the noise distribution is uniform [or a normal probability distribution], Gaussian or Laplacian distributed, or a Cauchy distribution, a Logistic distribution, a Student's t distribution [wherein a dimension of the noise corresponds to one of a Student's t-probability distribution], a Gumbel distribution, an Asymmetric Laplace distribution, a skew normal distribution, an exponential power distribution, a Johnson's SU distribution, a generalized normal distribution, or a generalized hyperbolic distribution, or any commonly known univariate or multivariate distribution; and in [0155]: The method may be one wherein the parametric (e.g. 
factorized) probability distribution is a normal distribution [wherein a dimension of the noise corresponds to one of … a normal probability distribution], a Laplace distribution, a Cauchy distribution, a Logistic distribution, a Student's t distribution [wherein a dimension of the noise corresponds to one of a Student's t-probability distribution], a Gumbel distribution, an Asymmetric Laplace distribution, a skew normal distribution, an exponential power distribution, a Johnson's SU distribution, a generalized normal distribution, or a generalized hyperbolic distribution. Bes, Jain and Kumar are analogous art because all involve developing information retrieval and processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for retrieving information and processing time series data based on the distributions used to model the source and noise, as disclosed by Bes, with the method of developing information retrieval and processing techniques using generator neural networks for predicting failures as collectively disclosed by Jain and Kumar. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Bes, Jain and Kumar noted above; doing so allows for more accurate modelling of the source and noise while maintaining a closed-form solution (Bes, 1240). Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lin et al. (US 20200265032, hereinafter ‘Lin’) in view of Kumar et al. (US 20230359193, hereinafter ‘Kumar’) in further view of Gupta et al. (US 12354202, hereinafter ‘Gupta’). Regarding claim 6, the rejection of claim 1 is incorporated, and Lin in combination with Kumar teaches the apparatus in claim 1, wherein the predetermined change measure includes one of: 
(in [0067] Once the discriminator network 414 has been trained (e.g., it has converged), then the detector 400 moves to the detection module 404 where the trained version (420) of the discriminator network is used to determine if there are differences between dataset 406 and dataset 416 (the two subgroups of an original dataset) using conditional information 408 and 418. The level of difference between datasets 406 and 416 is metric 422 (e.g., the distance between the difference distributions) that is then passed to evaluator 110 for further processing. In other words, the discriminator network 414 may be used to create a metric that measures the critical loss distribution between the two subgroups of the dataset that is supplied to the GAN detector. This may be represented by the following equation:) While Lin discloses the use of a distance metric, Lin and Kumar do not expressly teach the distance metric including one of Euclidean distance, absolute distance, Mahalanobis distance, cosine similarity, hamming distance, Minkowski distance, Jaccard index, and Haversine distance. Gupta does expressly disclose a distance metric including one of Euclidean distance, absolute distance, Mahalanobis distance, cosine similarity, hamming distance, Minkowski distance, Jaccard index, and Haversine distance, in 15:16-27: Some embodiments of the present invention maintain visual identity without modifying the architecture of the GAN. To do so, the present invention compares perceptual loss and/or prediction loss on the input and output data. These losses can be used to update the weights of the GAN. Perceptual loss is calculated by comparing the perceptual layer(s) for the network(s) of the input image and generator output based on Euclidean distance, cosine distance, Manhattan distance [distance metric including one of Euclidean distance, .. cosine similarity, hamming …], etc. 
Prediction loss is calculated by comparing the “final output” prediction based on a distance (Euclidean distance, cosine distance, Manhattan distance, etc.) or accuracy. Gupta, Kumar and Lin are analogous art because all involve developing information retrieval and processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for developing information retrieval and processing techniques using generator neural networks for modeling and generating unseen data features, as disclosed by Gupta, with the method of analyzing such datasets to determine whether there have been any changes or alterations using generator neural networks as collectively disclosed by Kumar and Lin. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Gupta, Kumar and Lin noted above; doing so helps implement generative adversarial network models to improve temporal consistency across a series of successive frames (Gupta, 2:25-44). Regarding claim 16, the limitations are similar to claim 6 and are thus rejected under the same rationale. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Kumar et al. (US 20230359193, hereinafter ‘Kumar’) in view of Jain et al. (US 20230334299, hereinafter ‘Jain’) in further view of Schiegg et al. (US 20220180249, hereinafter ‘Martin’). Regarding claim 7, the rejection of claim 1 is incorporated, and Kumar in combination with Jain teaches the apparatus in claim 1, wherein the second neural network of the GAN is a critic GAN network. (in [0089] Discriminator D is a neural network [wherein the second neural network of the GAN is a critic GAN network] that maps a derived feature vector to single scalar value D(.). The discriminator output D(.) 
can be interpreted as the probability that the given input to the discriminator D was a feature vector from training data belonging to good working condition of the well or generated G(z) by the generator G. D and G are simultaneously optimized through the below two-player minimax game with value function V(G, D). [0090] The discriminator is trained to maximize the probability of assigning good working condition training examples the “good” label and samples from p.sub.g the “failure” label. The generator is simultaneously trained to fool D via minimizing V(G)=log(1−D(G(z))), which is equivalent to maximizing V(G)=D(G(z)). During adversarial training the generator improves in generating derived features in good condition and the discriminator progresses in correctly identifying good and not good features [wherein the second neural network of the GAN is a critic GAN network].) Examiner notes that the discriminator neural network operates as a critic and is within the scope of the claimed critic GAN. Additionally, Martin teaches the discriminator neural network operates as a critic, in [0086]: In the example of FIG. 1A, the cGAN objective is given real data x.sub.1:Ti˜custom-character.sub.x, and learns to draw samples from custom-character.sub.x. A generator model (network) 10a g(z.sub.1:T, c.sub.1:T) is configured to sample from an implicitly induced distribution custom-character.sub.x. A minimization is performed to minimize the discrepancy div(custom-character.sub.x, custom-character.sub.x) between the real and induced distributions by adversarial training. In an example, the discrepancy may be reduced by Jensen-Shannon divergence, Wasserstein distance, or Maximum Mean Discrepancy (MMD). Given a generator model g and a discriminator model (critic) ƒ [wherein the second neural network of the GAN is a critic GAN network], this corresponds to the minimax objective. 
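For reference, the two-player minimax game with value function V(G, D) that the cited Kumar paragraphs paraphrase is conventionally written, in the standard GAN formulation, as:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Here the discriminator D maximizes the full expression while the generator G minimizes the second term, matching Kumar's description of minimizing V(G)=log(1−D(G(z))). The “critic” variant referenced by Martin replaces the probability-valued discriminator D with an unbounded scoring function ƒ and the Jensen-Shannon divergence implied by this objective with the Wasserstein distance.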
Martin, Jain and Kumar are analogous art because all involve developing information retrieval and processing techniques using machine learning systems and algorithms. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the prior art for retrieving and processing information by developing and applying a generator, trained using the discriminator, to generate synthetic data samples, as disclosed by Martin, with the method of developing information retrieval and processing techniques using generator neural networks for predicting failures as collectively disclosed by Jain and Kumar. One of ordinary skill in the art would have been motivated to combine the methods disclosed by Martin, Jain and Kumar noted above; doing so allows for generating a large number of synthetic data points characterizing an aspect of the performance of a target automotive system in a manner that enables various future scenarios to be simulated and statistically evaluated (Martin, [0003]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Demir et al. 
(US 20210319090): teaches in the abstract a generative adversarial network (GAN) for secure deepfake generation, wherein one or more processors are used to: generate, by a generative neural network, samples based on feedback received from a discriminator neural network and from an authenticator neural network, the generative neural network aiming to trick the discriminator neural network to identify the generated samples as real content samples; digest, by the authenticator neural network, the real content samples, the generated samples from the generative neural network, and an authentication code; embed, by the authenticator neural network, the authentication code into the generated samples from the generative neural network by contributing to a generator loss provided to the generative neural network; generate, by the generative neural network, content comprising the embedded authentication code. O’Donoghue et al. (US 20230119186): teaches in [0012] that the GAN machine learning model comprises a generator component and a discriminator component. In some embodiments, the at least one non-transitory memory and the computer program code are configured to, with the at least one processor, cause the apparatus to: generate, using the generator component, one or more synthetic device operation data objects; generate, using the discriminator component, one or more synthetic conformance score data objects based at least in part on the one or more synthetic device operation data objects; and train the discriminator component based at least in part on the one or more synthetic conformance score data objects and the one or more synthetic device operation data objects. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUWATOSIN ALABI whose telephone number is (571)272-0516. The examiner can normally be reached Monday-Friday, 8:00am-5:00pm EST. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /OLUWATOSIN ALABI/ Primary Examiner, Art Unit 2129

Prosecution Timeline

May 31, 2023
Application Filed
Feb 20, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579409
IDENTIFYING SENSOR DRIFTS AND DIVERSE VARYING OPERATIONAL CONDITIONS USING VARIATIONAL AUTOENCODERS FOR CONTINUAL TRAINING
2y 5m to grant Granted Mar 17, 2026
Patent 12572814
ARTIFICIAL NEURAL NETWORK BASED SEARCH ENGINE CIRCUITRY
2y 5m to grant Granted Mar 10, 2026
Patent 12561570
METHODS AND ARRANGEMENTS TO IDENTIFY FEATURE CONTRIBUTIONS TO ERRONEOUS PREDICTIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12547890
AUTOREGRESSIVELY GENERATING SEQUENCES OF DATA ELEMENTS DEFINING ACTIONS TO BE PERFORMED BY AN AGENT
2y 5m to grant Granted Feb 10, 2026
Patent 12536478
TRAINING DISTILLED MACHINE LEARNING MODELS
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
85%
With Interview (+26.3%)
3y 8m
Median Time to Grant
Low
PTA Risk
Based on 199 resolved cases by this examiner. Grant probability derived from career allow rate.
